Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every last bit of performance out of the hardware. Rendering programs can also end up being heavily memory dependent - with that many threads in flight working on a lot of data, low-latency memory can be key to everything. Here we run a few of the usual rendering packages under Windows 10, as well as a few new and interesting benchmarks.

Corona 1.3

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple - shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better seem to be easier to explain anyway). Corona likes to pile on the threads, so the results end up being very staggered based on thread count.
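Since the scene's ray budget is fixed, the time and rays-per-second figures are simple inverses of one another. A minimal sketch of the conversion - the ray count below is a hypothetical placeholder, not Corona's actual figure:

```python
# Hypothetical illustration of why rays per second is the "higher is
# better" inversion of render time. The ray budget per scene is fixed;
# the 1e9 value below is an assumed placeholder, not Corona's real count.
TOTAL_RAYS = 1_000_000_000

def rays_per_second(render_time_s: float) -> float:
    """A render finishing in half the time scores twice the rays/s."""
    return TOTAL_RAYS / render_time_s

print(rays_per_second(100.0))  # faster CPU
print(rays_per_second(200.0))  # slower CPU: exactly half the score
```

Either metric ranks CPUs identically; rays per second just makes the gap read intuitively, since doubling the score means doubling the throughput.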

Rendering: Corona Photorealism

Blender 2.78

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap up a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open source tools out there, both AMD and Intel actively work to help improve the codebase, for better or worse on their own and each other's microarchitectures.
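A hedged sketch of that style of measurement using Blender's standard command-line interface - the scene file and output path here are placeholders, not the actual workload used in this review:

```python
# Sketch: time a headless Blender render of a scene's first frame.
# "workload.blend" and the /tmp output path are placeholder assumptions.
import subprocess
import time

def render_first_frame_cmd(blend_file: str) -> list:
    """Build a headless Blender invocation that renders only frame 1."""
    return [
        "blender", "--background", blend_file,
        "--render-output", "/tmp/frame_",  # placeholder output path
        "--render-frame", "1",
    ]

def time_first_frame(blend_file: str) -> float:
    """Return wall-clock seconds taken to render the first frame."""
    start = time.perf_counter()
    subprocess.run(render_first_frame_cmd(blend_file), check=True)
    return time.perf_counter() - start

# Usage: elapsed = time_first_frame("workload.blend")
```

Timing a single fixed frame keeps the run short while still exercising the full multi-threaded render path.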

Rendering: Blender 2.78

LuxMark

As a synthetic renderer, LuxMark might come across as somewhat arbitrary, given that it's mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each coding mode for cores and IPC, we also get to see the difference in performance moving from a C++-based code stack to an OpenCL one with a CPU as the main host.

Rendering: LuxMark CPU C++

POV-Ray 3.7b3

Another regular in most benchmark suites, POV-Ray is yet another ray tracer, but one that has been around for many years. It just so happens that in the run-up to AMD's Ryzen launch, the code base became active again, with developers making changes and pushing out updates. Our version and benchmark procedure were locked in just before that happened, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Cinebench R15

The latest version of CineBench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency give performance in the ST test, whereas good scaling across many cores is where the MT test wins out.

Rendering: CineBench 15 MultiThreaded

Rendering: CineBench 15 SingleThreaded

 


  • Gothmoth - Monday, June 19, 2017 - link

    i don't care about powerdraw that much if i can COOL the CPU and keep the cooling quiet.

    but in this case the powerdraw is high and the heat is crazy.

    and all because of intel insisting on saving a few dollars on a 1000 dollar CPU by using TIM?

    WTF....
  • Ej24 - Monday, June 19, 2017 - link

    I wish amd would have released Threadripper closer to ryzen. That way amd wouldn't invite comparisons of ryzen to Intel x99/x299. They kind of shot themselves in the foot. AM4 is only directly comparable to lga115x as a platform. R3, 5 and 7 are only really intended to compete with i3, 5, and 7 consumer parts. Amd simply doubled the core count per dollar at the consumer line. It's merely coincidental at this point that ryzen core count lines up with Intel HEDT. The platforms are not comparable in use case or intent. All these comparisons will be null when Threadripper/x399 is released, as that is AMD's answer to x299.
  • Ej24 - Monday, June 19, 2017 - link

    how is the 7740x, with a 112w tdp, only drawing 80w at full load? I understand that tdp isn't power draw but thermal dissipation. However the two values are usually quite close. In my experience, max turbo power consumption surpasses the tdp rating in watts.
    For example, my 88w tdp 4790k consumes 130w at max 4-core turbo. My 4790S, a 65w tdp part, consumes 80w at max 4-core turbo. My 4790t, 45w tdp, consumes 55w at max 4-core turbo. So how is it the 7740x consumed 80W at max utilization??
  • AnandTechReader2017 - Tuesday, June 20, 2017 - link

    Agreed; on http://www.anandtech.com/show/10337/the-intel-broa... the all-core load for the i7 6950X is 135W, yet on this graph it's 110W. Something is wrong with those load numbers.
  • Ian Cutress - Tuesday, June 20, 2017 - link

    It's consumer silicon running a single notch up the voltage/frequency curve. Probably binned a bit better too. 112W is just a guide to make sure you put a stonking great big cooler on it. But given the efficiency we saw with Kaby Lake-S processors to begin with, it's not that ludicrous.
  • Flying Aardvark - Monday, June 19, 2017 - link

    This is an interesting time (finally), again in CPUs. To answer the question you posed, "Ultimately a user can decide the following". I decided to go mini-ITX this time. Chose Ryzen for this, and initially the 1800X. Had to downgrade to the 1700 due to heat/temps, but overall I don't think anything competes against AMD at all in the Node202 today.

    That's one area where Intel is MIA. Coffeelake will be 6C/12T, 7700K is 4C/8T. R7-1700 is 65W and 8C/16T. Works great. I paired mine with a 1TB 960 Pro and Geforce 1060 Founders Edition.

    If I moved to anything else, it would be all the way to 16C/32T Threadripper. I'm really unimpressed by this new Intel lineup, power consumption and heat are simply out of control. Dead on arrival.
  • Gothmoth - Monday, June 19, 2017 - link

    what mobo and ram did you use? is your ryzen build really stable?

    i need full load stability 24/7.
  • Flying Aardvark - Monday, June 19, 2017 - link

    What, you don't need just 60% stability? Yes it's stable.

    I did have one bluescreen and it was the Nvidia driver. I think it's unlikely most people would run into whatever caused it, because I use a triple monitor setup and lots of programs / input switching, and it crashed upon a DisplayPort redetection.

    I bought the Geforce 1060 because it was the most efficient and well-built blower fan cooled GPU I could find. But buying again, I'd go for the best Radeon 480/580 that I could find.

    I never had a bluescreen for a decade running Intel CPUs and AMD GPUs, so I dislike changing to AMD CPUs and Nvidia GPUs.. but I think it's safest to run a Radeon. Just less likely to have an issue IMO.
    Other than that, no problems at all. Rock solid stable. I used the Biostar board and G.Skill "Ryzen" RAM kit.
  • Gothmoth - Tuesday, June 20, 2017 - link

    it's something different if a system is stable for 2-3 hours under load or 24/7 under load.. capiche? :-)
  • Gothmoth - Tuesday, June 20, 2017 - link

    btw... thanks for your answer.

    i use a triple monitor setup and use many programs at once... what sense would an 8-10 core make otherwise. :-)
