3DS MAX 2013

Our previous benchmark, the "architecture" scene included in the SPEC APC 3ds Max 2007 test, was getting too old, so we decided to switch to the "Trol_cleric29_max2010" scene while upgrading to 3DS Max 2013. We render at 1080p (1920x1080) and measure the time it takes to render the first three frames (frames 0 through 2). The 64-bit version of 3DS Max 2013 runs on top of 64-bit Windows Server 2008 R2 SP1. All results are reported as rendered images per hour, so higher is better.
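As a rough illustration (not part of our test setup), the sketch below shows how per-frame render times translate into the images-per-hour metric used here; the frame times in it are hypothetical placeholders, not measured results.

```python
# Illustrative only: the frame times below are hypothetical, not measurements.
frame_times_s = [118.4, 117.9, 118.8]  # seconds for frames 0, 1 and 2

total_s = sum(frame_times_s)                      # total render time in seconds
images_per_hour = len(frame_times_s) / total_s * 3600

print(f"Total render time: {total_s:.1f} s")
print(f"Throughput: {images_per_hour:.1f} rendered images per hour")
```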

[Chart: 3DS Max 2013 rendering performance, in rendered images per hour]

The results are pretty chaotic at first sight, but the numbers are correct and can be verified by a third party, or by yourself for that matter. Let us try to make sense of them.

First of all, we used the NVIDIA Mental Ray renderer, which despite the "NVIDIA" in its name is still a CPU-only renderer. Secondly, the new benchmark is better than the old one: most of the time all cores are working at very high CPU load, typically 96% and more. However, we noticed that without Hyper-Threading and CMT the CPUs are able to turbo longer and at higher clock speeds, and there are short periods of single-threaded activity. These two facts together probably explain why disabling Hyper-Threading or CMT improves performance by 20% and more.
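As a back-of-the-envelope illustration of how those two effects can combine, the sketch below models render time as a large parallel portion plus a small serial portion. Every input (clocks, SMT throughput gain, serial fraction) is our own assumption, not a measurement from this test.

```python
# Purely illustrative model; all numbers below are assumptions, not measurements.
def render_time(parallel_work, serial_work, all_core_clock, single_core_clock,
                aggregate_threads):
    """Parallel work scales with the all-core clock times the aggregate thread
    throughput; the serial part only benefits from the single-core turbo."""
    return (parallel_work / (all_core_clock * aggregate_threads)
            + serial_work / single_core_clock)

# Hypothetical 8-core CPU, 95% parallel / 5% serial workload.
# SMT on: lower sustained turbo, modest throughput gain from the extra threads.
t_smt_on = render_time(95, 5, all_core_clock=2.6, single_core_clock=3.0,
                       aggregate_threads=8 * 1.05)
# SMT off: higher sustained turbo, one thread per core.
t_smt_off = render_time(95, 5, all_core_clock=3.0, single_core_clock=3.3,
                        aggregate_threads=8 * 1.0)

print(f"Estimated gain from disabling SMT: {t_smt_on / t_smt_off - 1:.0%}")
```

With these made-up inputs the model yields roughly a 10% gain; a larger turbo delta or a longer single-threaded portion pushes the estimate toward the 20%+ we measured.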

Cinebench reports that the Xeon E5-2660 is 20% faster than the Opteron 6380; in 3DS Max, the Xeon is up to 77% faster. The new Mental Ray engine seems to be extremely well optimized for the Intel architectures but underperforms on the AMD architecture.

At the end of the day, it is clear that Intel has a huge advantage here, but also that this market is shifting more and more toward GPU rendering. That is outside the scope of this article, but many people in the rendering business have adopted GPU-accelerated rendering thanks to NVIDIA's iray renderer. CPU + GPU rendering with iray seems to outperform Mental Ray in almost all scenes except those with relatively simple lighting, so combining an Intel Xeon E5 with a fast GPU is the best option.

Comments

  • coder543 - Wednesday, February 20, 2013 - link

    99%? I love your highly scientific numbers. And yes, of course only Intel can design a perfect processor. I'm glad you were here to let everyone know.

    To quote Abraham Lincoln, (no, not really) "All of our servers run Intel. Everything AMD makes is no better than British tea."
  • Tams80 - Wednesday, February 20, 2013 - link

    How much are Intel paying you? XD

    Seriously though; you've gone through the entire comments* posting walls of text that add little to the discussion. Not only that, but your posts are a little offensive.

    *I realise I'm being hypocritical here.
  • JKflipflop98 - Wednesday, February 20, 2013 - link

    Well, Intel does pay me and I'll be the first to say these chips are lookin pretty good in comparison with their previous generation counterparts. Good value for the money for sure.

    As Anand says, however, HPC users are usually after the "extreme" ends of the scale. They're either after max performance or max performance to fit into a certain power/heat envelope. In either case, we win.
  • Tams80 - Wednesday, February 20, 2013 - link

    I'm sure you know what I mean. It wasn't exactly high brow humour.

    They certainly do look good, especially for a company that has already invested in AMD chips. Intel may well be better in both use cases, but at least AMD are providing competent competition.
  • tech6 - Wednesday, February 20, 2013 - link

    The AMD 6x00 series has always looked nearly competitive on paper but is nowhere near Intel performance and efficiency. We have 3 data centers: one is running a mix of 6100 and 6200 Opterons while the others are running older Xeon 7300s and new E5 Xeons. In terms of single-threaded and total performance, the 6x00 series cannot keep up with even the old 7300 Xeons and can't touch the E5s. What AMD needs is a 30-40% boost in real world performance before they could be considered competitive. AMD also needs better relations with VMware to optimize memory management on that platform.

    The price difference won't help them, as the cost for a data center host is mostly software and can be $15K vCloud and $10K hardware. That reduces the cost advantage to 5%, but delivers worse performance and uses more power.

    Most data centers are looking to get the most from their VMWare investments while reducing power consumption and these AMDs do neither.
  • duploxxx - Wednesday, February 20, 2013 - link

    Interesting information, but hard to assess if you don't add some figures and real data.

    First of all, the 7300 series had huge disadvantages with their FSB, so claiming that these are way faster than the 6100/6200 Opteron series is debatable. I 100% tend to disagree; we had severe VMware performance issues on these machines with our high-end applications.

    I'll just use AnandTech as a reference:
    http://www.anandtech.com/show/2851/8
    http://www.it.anandtech.com/show/2978/amd-s-12-cor...

    Even the 7400 series is a dog against the Opteron 8000 series, and it is way older and slower still compared to the 6000 series.

    For the E5 you have a point: the E5 series often shows a more responsive platform. But once you load real-life applications inside the hypervisor and they start hitting those HT cores, we have seen degraded performance in several of our datacenters. This doesn't really show up in the AnandTech vApus scores, because some software in the benchmark (the web servers) produces code-optimised results for the Intels, hence the higher score.

    The 6200 series did show some responsiveness disadvantages, but much of that has to do with the configuration of BIOS and power profiles in both the server and the hypervisor, so you might want to blame the setup rather than the servers. For the 6200 series we actually bought a 10% higher clock speed version to cover that, but reduced that again with the 6300 series.
  • silverblue - Wednesday, February 20, 2013 - link

    I'm going to go trawl the internet (note I said trawl, not troll - very important to bear in mind) for articles on FX CPUs resulting in PCs dying... nope, no matches. Funny, huh?

    I've also run a search concerning AMD CPUs producing incorrect results and crashing; any such occurrences would be the result of design bugs which, I must point out, are not limited to AMD. Nehalem had a bug causing spurious interrupts that locked up the hypervisor on Windows Server 2008 R2, for example. Core 2 had a huge list of bugs.
  • Shadowmaster625 - Wednesday, February 20, 2013 - link

    It is hard to disagree with the statement, knowing how overpaid US IT professionals are. But I just want to point out that this mentality is one of the reasons IT is being outsourced at a furious rate. Keep that in mind before you go blaming someone else for US jobs being lost.

    This meager cost savings may not matter here, but what about some company in Asia? They might actually bite on a few hundred dollar savings, especially if they are ordering quantities in the hundreds. In that case, $300 becomes $30,000. Which might be more than they spend on the people who deploy those servers.
  • ExarKun333 - Wednesday, February 20, 2013 - link

    Outsourced work isn't much cheaper these days, and on the whole the workers are of much lower quality.
  • sherlockwing - Wednesday, February 20, 2013 - link

    Except in Asia (especially in developing countries), the cost of electricity is a lot higher due to rapidly expanding industry and population, and a lack of power plants.
