CPU Tests: Rendering

Rendering tests, compared to others, are often a little simpler to digest and automate. All of the tests put out some sort of score or time, usually in a way that is fairly easy to extract. These tests are some of the most strenuous on our list, due to the highly threaded nature of rendering and ray-tracing, and can draw a lot of power. If a system is not properly configured to deal with the thermal requirements of the processor, the rendering benchmarks are where it shows most easily, as the frequency drops over a sustained period of time. Most benchmarks in this case are re-run several times, and the key is having an appropriate idle/wait time between benchmarks to allow temperatures to normalize after the previous test.

Blender 2.83 LTS: Link

One of the popular tools for rendering is Blender, a public open-source project that anyone in the animation industry can get involved in. This extends to conferences, use in films and VR, a dedicated Blender Institute, and everything you might expect from a professional software package (except perhaps a professional-grade support package). Because it is open source, studios can customize it in as many ways as they need to get the results they require. It ends up being a big optimization target for both Intel and AMD in this regard.

For benchmarking purposes, we fell back to rendering a single frame from a detailed project. Most reviews, as we have done in the past, focus on one of the classic Blender renders, known as BMW_27. It can take anywhere from a few minutes to almost an hour on a regular system. However, now that Blender has moved to a Long Term Support (LTS) model with the latest 2.83 release, we decided to go for something different.

We use a scene called PartyTug at 6AM by Ian Hubert, which is the official image of Blender 2.83. It is 44.3 MB in size, and uses some of the more modern compute properties of Blender. As it is more complex than the BMW scene but uses different aspects of the compute model, the time to process is roughly similar to before. We loop the scene for at least 10 minutes, taking the average time of the completed renders. Blender offers a command-line tool for batch commands, and we redirect the output into a text file.
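As a rough sketch of how this is automated (the scene filename and frame number below are placeholders, and the exact timing line Blender prints can vary by build), the background render can be launched and its output parsed along these lines:

    import subprocess, re

    # Render one frame of the scene in background mode and capture the console output.
    # "blender" is assumed to be on the PATH; the .blend filename and frame number
    # are placeholders for whichever scene is being tested.
    result = subprocess.run(
        ["blender", "-b", "partytug_6am.blend", "-f", "1"],
        capture_output=True, text=True, check=True
    )

    # Blender prints timing lines such as "Time: 02:34.56 (Saving: 00:00.40)";
    # take the last one and convert it to seconds.
    times = re.findall(r"Time:\s*([\d:.]+)", result.stdout)
    if times:
        parts = [float(p) for p in times[-1].split(":")]
        seconds = sum(value * 60 ** i for i, value in enumerate(reversed(parts)))
        print(f"Render time: {seconds:.2f} s")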

(4-1) Blender 2.83 Custom Render Test

 

Corona 1.3: Link

Corona is billed as a popular high-performance photorealistic rendering engine for 3ds Max, with development for Cinema 4D support as well. In order to promote the software, the developers produced a downloadable benchmark on the 1.3 version of the software, with a ray-traced scene involving a military vehicle and a lot of foliage. The software does multiple passes, calculating the scene, geometry, preconditioning and rendering, with performance measured in the time to finish the benchmark (the official metric used on their website) or in rays per second (the metric we use to offer a more linear scale).

The standard benchmark provided by Corona is interface driven: the scene is calculated and displayed in front of the user, with the ability to upload the result to their online database. We got in contact with the developers, who provided us with a non-interface version that allows for command-line entry and retrieval of the results very easily. We loop the benchmark five times, waiting 60 seconds between each run, and take an overall average. The time to run this benchmark can be around 10 minutes on a Core i9, up to over an hour on a quad-core 2014 AMD processor or dual-core Pentium.
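The command-line build we use is not public, so the executable name in the sketch below is a placeholder, but the loop-with-cooldown logic is roughly this minimal Python sketch:

    import subprocess, time, statistics

    times = []
    for i in range(5):
        start = time.time()
        # "corona_benchmark_cli" stands in for the non-interface build the
        # developers supplied; the public benchmark is GUI-driven.
        subprocess.run(["corona_benchmark_cli"], check=True)
        times.append(time.time() - start)
        if i < 4:
            time.sleep(60)   # 60-second idle period so temperatures can normalize

    print(f"Average time to finish: {statistics.mean(times):.1f} s")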

(4-2) Corona 1.3 Benchmark

 

POV-Ray 3.7.1: Link

A long-time benchmark staple, POV-Ray is another rendering program that is well known for loading up every single thread in a system, regardless of cache and memory levels. After a long period of POV-Ray 3.7 being the latest official release, the POV-Ray codebase suddenly saw a flurry of activity from both AMD and Intel when AMD launched Ryzen, as both knew that the software (with its built-in benchmark) would become an optimization target for their hardware.

We had to stick a flag in the sand when it came to selecting a version that was fair to both AMD and Intel, and still relevant to end-users. Version 3.7.1 fixes a significant bug in the early 2017 code: a write-after-read pattern that both the Intel and AMD optimization manuals advise against, and removing it provides a nice performance boost.

The benchmark can take over 20 minutes on a slow system with few cores, around a minute or two on a fast system, or mere seconds on a dual-socket high-core-count EPYC. Because POV-Ray draws a large amount of power and current, it is important to make sure the cooling is sufficient and that the system stays in its high-power state. A motherboard with poor power delivery and low airflow can skew the CPU standings in ways that are not immediately obvious, for example if the power limit only causes a 100 MHz drop as the processor changes P-states.
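As a sketch, assuming a Unix-style build where the -benchmark switch runs the standard built-in scene (the Windows build exposes the same benchmark through its menus), a single timed pass looks like this:

    import subprocess, time

    # Run POV-Ray's built-in benchmark scene and time it with a simple wall clock.
    # The "-benchmark" switch is assumed to run the standard scene on this build.
    start = time.time()
    subprocess.run(["povray", "-benchmark"], check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    elapsed = time.time() - start
    print(f"POV-Ray benchmark: {elapsed:.1f} s")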

(4-4) POV-Ray 3.7.1

V-Ray: Link

We have a couple of renderers and ray tracers in our suite already, however V-Ray's benchmark was requested often enough for us to roll it into our suite. Built by Chaos Group, V-Ray is a 3D rendering package compatible with a number of popular commercial imaging applications, such as 3ds Max, Maya, Unreal, Cinema 4D, and Blender.

We run the standard standalone benchmark application, but in an automated fashion to pull out the result in the form of kilosamples/second. We run the test six times and take an average of the valid results.
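A minimal sketch of the averaging step is below; how the score is captured (here, a text file with one kilosamples/second figure per run, named vray_results.txt) is an assumption, as the exact options depend on the benchmark build.

    import statistics

    # Assume each of the six runs appended its kilosamples/second figure
    # (one number per line) to vray_results.txt; lines that are not a valid
    # number are treated as failed runs and skipped.
    scores = []
    with open("vray_results.txt") as f:
        for line in f:
            try:
                scores.append(float(line.strip()))
            except ValueError:
                continue   # skip invalid runs

    if scores:
        print(f"Average: {statistics.mean(scores):.1f} ksamples/s "
              f"over {len(scores)} valid runs")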

(4-5) V-Ray Renderer

 

Cinebench R20: Link

Another common staple of a benchmark suite is Cinebench. Based on Cinema4D, Cinebench is a purpose-built benchmark tool that renders a scene with both single-threaded and multi-threaded options. The scene is identical in both cases. The R20 version means that it targets Cinema 4D R20, a slightly older version of the software, which is currently on R21. Cinebench R20 was launched because the R15 version had been out for a long time, and despite the difference between the benchmark and the latest version of the software on which it is based, Cinebench results are still quoted frequently in marketing materials.

Results for Cinebench R20 are not comparable to R15 or older, because both the scene being used and the code base have been updated. The results are output as a score from the software, which is directly proportional to the time taken. Using the benchmark flags for single-CPU and multi-CPU workloads, we run the software from the command line, which opens the test, runs it, and dumps the result into the console, which is redirected to a text file. The test is repeated for a minimum of 10 minutes for both ST and MT, and then the runs are averaged.
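A minimal sketch of a single pass is below, assuming the g_CinebenchCpu1Test and g_CinebenchCpuXTest command-line switches supported by recent Cinebench releases; exact behaviour can differ between versions.

    import subprocess

    # Run the single-threaded and multi-threaded tests from the command line and
    # redirect the console output (which contains the scores) to a log file.
    # The g_CinebenchCpu1Test / g_CinebenchCpuXTest switches are assumptions here.
    with open("cinebench_r20.log", "w") as log:
        subprocess.run(["Cinebench.exe", "g_CinebenchCpu1Test=true"],
                       stdout=log, stderr=subprocess.STDOUT, check=True)
        subprocess.run(["Cinebench.exe", "g_CinebenchCpuXTest=true"],
                       stdout=log, stderr=subprocess.STDOUT, check=True)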

(4-6a) CineBench R20 Single Thread
(4-6b) CineBench R20 Multi-Thread

 


229 Comments


  • Bagheera - Tuesday, May 18, 2021 - link

    Intel isn't gonna have enough EUV in time to ramp 7nm by 2023. they are in serious trouble and floating on borrowed time, most analysts just aren't aware.
    https://semiwiki.com/forum/index.php?threads/will-...

    Intel's 10nm is indeed competitive with TSMC 7nm in terms of density, but AMD will be moving to 5nm with Zen 4 next year, what can Intel's response be? They can increase outsourcing to TSMC but that means less utilization of their own fabs which is bad. They absolutely won't be able to get 7nm ready in time to compete with AMD on TSMC 5nm. It will be back to the status quo of Intel lagging behind AMD by one full node, and likely foregoing power efficiency for performance parity.
  • Bagheera - Tuesday, May 18, 2021 - link

    No actual semiconductor professional expected Intel 10nm to surpass TSMC 7nm in any tangible way. The only people who expected otherwise are uninformed enthusiasts (usually gamers, who tend to be partial to Intel).

    The gap will only widen from here. Intel really shot itself in the foot with bad EUV planning.
    https://semiwiki.com/semiconductor-services/ic-kno...
  • watzupken - Tuesday, May 18, 2021 - link

    I feel this review concludes that Intel have effectively lost their competitive edge when their fab started to lag behind. In fact, it's also conclusive that the SuperFin is really nothing super at all even when compared to TSMC's 7nm. It's just 10nm on steroids, just like what they have been doing with their 14nm. From an architecture standpoint, Willow Cove is decent, but the bulk of the performance is due to pushing for very high clock speed at the expense of very high power consumption. If this was released on a desktop, it will be a hit. But on mobile, I don't think one can easily find a laptop that have the cooling capability to tame the heat output and also maintain a decent battery life. Especially since this processor will likely be paired with a high-end GPU. To me, this is a worrying trend for Intel, because they will likely have to stick with 10nm for another couple of years at least. If their new CPU architecture is unable to provide decent IPC gains without bursting the power limit, they will surely be in trouble, especially when AMD's 5nm chips may appear in the market first.
  • mode_13h - Tuesday, May 18, 2021 - link

    > If this was released on a desktop, it will be a hit.

    Yes.

    > I don't think one can easily find a laptop that have the cooling capability
    > to tame the heat output and also maintain a decent battery life.

    At 35 W, it would probably make a fine laptop. Unfortunately, competitive pressure is pushing Intel to juice their CPUs more than they really should.
  • sandeep_r_89 - Tuesday, May 18, 2021 - link

    Can you please please stop using the word BIOS for modern devices? Pretty much all devices have been on UEFI only for several years now.
  • Silver5urfer - Tuesday, May 18, 2021 - link

    Ah, the M1, fastest CPU ever, doesn't make it into the SMT SPEC scores for some reason, like always. Don't worry, we will see the Apple CPU, the X version of the chip iteration, when it finally catches up to the SMT of these parts; until then, M1 is the best CPU ever.

    TGL machines will throttle at peak load with the thin-and-light garbage heatsinks. That's a given; people should stop buying these parts. Laptop batteries will be destroyed eventually, and none of them will have the Dell Desktop Power plan, which only workstations have (Lenovo and Dell); Alienware used to have it, not sure about the A51M R1 and R2 now, and they also had their GFX modules smoke. Anyway, the battery won't be serviceable by the end user, the expensive machine will die, and the BGA with soldered hardware further limits everything. Add the overheating NVMe SSDs due to poor ventilation, which happens in Alienware machines too, even though they are marketed for maximum performance.
  • Spunjji - Thursday, May 20, 2021 - link

    🤪🤡😤🤬🤥💩
  • mode_13h - Friday, May 21, 2021 - link

    Oof. Looks like *someone* is giving Emojipedia a workout!
    : )
  • Spunjji - Tuesday, May 18, 2021 - link

    This ended up how I was expecting - superior single-core performance where there's thermal headroom, dropping down to broadly competitive multi-threaded performance at the rated TDP, and with a faintly ludicrous maximum power draw under all-core boost.

    I'm glad it's competitive. That's needed. What I'm a little less glad about is that we're almost certainly in for another round of CPU performance varying *wildly* between different designs, which has been true to some extent for a while, but getting steadily worse ever since Ice Lake showed up.

    Given most OEMs' approach to cooling, I'd wager that the average device shipping with Cezanne will provide better CPU performance than the average device with Tiger 45 simply because of Cezanne's greater efficiency.
  • tekit - Tuesday, May 18, 2021 - link

    Heard they enabled undervolting again for tiger lake-h, can anyone confirm? I wonder how much undervolting potential there is and if that could balance the equation against AMD.
