Rendering: Blender 2.5 Alpha 2

Blender 2.5 Alpha 2
Operating System: Windows Server 2008 R2 Enterprise (64-bit)
Software: Blender 2.5 Alpha 2
Benchmark software: Built-in render engine

3ds Max 2010 crashed on almost all of our servers. Granted, it is meant to be run on a workstation, not a server. We'll try some tests with Backburner later, when the 2011 version is available. In the meantime, it is time for something less bloated and especially less expensive: Blender.

Blender has been getting a lot of positive attention, and judging by its fast-growing community, it is on its way to becoming one of the most popular 3D animation packages out there. The current stable version, 2.49, can only render with up to 8 threads; Blender 2.5 Alpha 2 can use up to 64. To our surprise, the software was pretty stable, so we went ahead and started testing.

You can easily run this benchmark yourself if you like. We used the “metallic robot”, a scene with rather complex lighting (reflections!) and raytracing. To make the benchmark more reproducible, we changed the following parameters (a scripted version of these settings is sketched after the list):

  1. The resolution was set to 2560 x 1600
  2. Anti-aliasing was set to 16 samples
  3. We disabled compositing in post-processing
  4. Tiles were set to 8x8 (X=8, Y=8)
  5. Threads were set to auto (one render thread per logical CPU)
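
For reference, here is a minimal sketch of these settings as a Blender Python script. This is our own illustration, not the script used for the article: the property names follow the 2.5-series Python API for the internal render engine, and some of them (notably parts_x/parts_y for the tile grid) were renamed in later releases, so check them against your build.

    # set_benchmark.py - applies the benchmark settings listed above.
    # Sketch only: property names follow the Blender 2.5-series Python API
    # for the internal render engine and may differ between builds.
    import bpy

    render = bpy.context.scene.render

    render.resolution_x = 2560           # 1. resolution 2560 x 1600
    render.resolution_y = 1600
    render.resolution_percentage = 100

    render.use_antialiasing = True       # 2. anti-aliasing at 16 samples
    render.antialiasing_samples = '16'

    render.use_compositing = False       # 3. no compositing in post-processing

    render.parts_x = 8                   # 4. 8x8 tiles (renamed in later releases)
    render.parts_y = 8

    render.threads_mode = 'AUTO'         # 5. one render thread per logical CPU

Running something like blender -b metallic_robot.blend -P set_benchmark.py -f 1 (the .blend file name is a placeholder) applies the settings and renders frame 1 in the background, which is convenient for timing on headless servers.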

Let us first check out the results on Windows 2008 R2:

[Chart: Blender 2.5 Alpha 2 render times, Windows]

At first, the Opteron 6174 results were simply horrible: 44.6 seconds, slower than the dual six-core Opteron!

Ivan Paulos Tomé, the official maintainer of the Brazilian Blender 3D Wiki, gave us some interesting advice. The default number of tiles is apparently set to 5x5. This results in a short period of 100% CPU load on the Opteron 6174, followed by a long period where the CPU load drops below 30%. We first assumed that 8x6, twice as many tiles as there are cores, would be best. After some experimenting, we found that 8x8 is best for all machines. The Xeons and six-core Opterons gained 10%, while the 12-core Opteron became 40% (!) faster. This underlines that the more cores you have, the harder it is to make good use of them.
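
As a rough illustration of the reasoning above (and nothing more), the quick calculation below compares candidate tile grids against the number of render threads on the test systems: with only 5x5 = 25 tiles, a 24-thread machine gets roughly one tile per thread, so the moment the first tiles finish there is little left to keep those cores busy.

    # Rough illustration: tiles per render thread for the grids discussed above.
    def tiles_per_thread(grid_x, grid_y, threads):
        """Average number of tiles each render thread gets to work on."""
        return (grid_x * grid_y) / threads

    systems = [("dual Opteron 6174", 24),        # 2 x 12 cores
               ("dual Xeon X5670 (HT on)", 24),  # 2 x 6 cores, 24 logical CPUs
               ("dual six-core Opteron", 12)]    # 2 x 6 cores

    for gx, gy in [(5, 5), (8, 6), (8, 8)]:
        for name, threads in systems:
            print(f"{gx}x{gy} tiles on {name}: "
                  f"{tiles_per_thread(gx, gy, threads):.1f} tiles/thread")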

Blender can be run on several operating systems, so let us see what happens under 64-bit Linux (SUSE SLES 11).

Rendering: Blender 2.5 Alpha 2 on SLES 11

Blender 2.5 Alpha 2
Operating System: SUSE SLES 11, Linux kernel 2.6.27.19-5-default (SMP)
Software: Blender 2.5 Alpha 2
Benchmark software: Built-in render engine

[Chart: Blender 2.5 Alpha 2 render times, Linux]

What happened here? Not only is Blender 50 to 70% faster on Linux, but the tables have turned. As the software is still in the Alpha 2 phase, the results should be taken with a grain of salt, but still. For some reason, the Linux version is capable of keeping the cores fed much longer. On Windows, the first half of the benchmark is spent at 100% CPU load, after which it quickly drops to 75%, 50% and even 25%. On Linux, the CPU load, especially on the Opteron 6174, stays at 99-100% for much longer.
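
If you want to reproduce these CPU-load curves, a simple sketch like the one below will do. It is an assumption on our part that psutil is installed, and metallic_robot.blend is a placeholder for the benchmark scene.

    # Sketch: sample the system-wide CPU load once per second while a
    # background render runs, to plot load curves like the ones described above.
    import subprocess
    import psutil

    proc = subprocess.Popen(["blender", "-b", "metallic_robot.blend", "-f", "1"],
                            stdout=subprocess.DEVNULL)

    samples = []
    while proc.poll() is None:                            # until the render exits
        samples.append(psutil.cpu_percent(interval=1.0))  # average % over 1 second

    print("load per second:", samples)
    print("seconds spent above 95% load:", sum(1 for s in samples if s > 95.0))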

So is the Opteron 6174 the one to get? We are not sure. If these benchmarks still hold when we test with the final 2.5 version, there is a good chance that the eight-core Opteron 6136 at 2.4 GHz will be the Blender champion: it has a much lower price and slightly higher per-core performance for less complex rendering work. We hope to follow up with new benchmarks. It is pretty amazing what Blender does with a massive number of cores. At the same time, we imagine Intel's engineers will quickly find out why the Blender engine fails to make good use of the dual Xeon X5670's 24 logical cores. This is far from over yet…

Comments

  • Accord99 - Monday, March 29, 2010 - link

    The X5670 is 6-core.
  • JackPack - Tuesday, March 30, 2010 - link

    LOL. Based on price?

    Sorry, but you do realize that the majority of these 6-core SKUs will be sold to customers where the CPU represents a small fraction of the system cost?

    We're talking $40,000 to $60,000 for a chassis and four fully loaded blades. A couple hundred dollars difference for the processor means nothing. What's important is the performance and the RAS features.
  • JohanAnandtech - Tuesday, March 30, 2010 - link

    Good post. Indeed, many enthusiasts don't fully understand how it works in the IT world. Some parts of the market (like HPC, rendering, web hosting) are very price sensitive and will care about a few hundred dollars more, as the price per server is low. A large part of the market won't care at all. If you are paying $30K for a software license, you are not going to notice a few hundred dollars on the CPUs.
  • Sahrin - Tuesday, March 30, 2010 - link

    If that's true, then why did you benchmark the slower parts at all? If it only matters in HPC, then why test databases? Why would the IDMs spend time and money binning CPUs?

    Responding with "Product differentiation and IDM/OEM price spreads" simply means that it *does* matter from a price perspective.
  • rbbot - Saturday, July 10, 2010 - link

    Because those of us with applications running on older machines need comparisons against older systems in order to determine whether it is worth migrating existing applications to a new platform. Personally, I'd like to see more comparisons to even older kit in the 2-3 year range that more people will be upgrading from.
  • Calin - Monday, March 29, 2010 - link

    Some programs were licensed by physical processor chips, others were licensed by logical cores. Is this still correct, and if so, could you explain it based on the software used for benchmarking?
  • AmdInside - Monday, March 29, 2010 - link

    Can we get any Photoshop benchmarks?
  • JohanAnandtech - Monday, March 29, 2010 - link

    I have to check, but aside from a few very exotic operations, I doubt anything is going to scale beyond 4-8 cores. These CPUs are not made for Photoshop, IMHO.
  • AssBall - Tuesday, March 30, 2010 - link

    Not sure why you would be running Photoshop on a high-end server.
  • Nockeln - Tuesday, March 30, 2010 - link

    I would recommend trying to apply some advanced filters on a 200+ GB file.

    Especially with the new higher-megapixel cameras, I could easily see how some professionals would fork over the cash if it reduces the time they have to spend in front of the screen waiting for things to process.

