3DS MAX 2013

Our previous benchmark, the "architecture" scene included in the SPEC APC 3DS Max 2007 test, was getting way too old, so we decided to switch to the "Trol_cleric29_max2010" scene while upgrading to 3DS Max 2013. We render at 1080p (1920x1080) resolution and measure the time it takes to render the first three frames (0 through 2). The 64-bit version of 3DS Max 2013 runs on top of 64-bit Windows Server 2008 R2 SP1. All results are reported as rendered images per hour, so higher is better.
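
To make that metric concrete, here is a minimal sketch (in Python) of how a measured render time translates into the rendered-images-per-hour figure; the 180-second render time is hypothetical, purely for illustration:

    # Hypothetical example: converting the render time for frames 0-2
    # into the "rendered images per hour" metric used in the chart below.
    FRAMES = 3                   # frames 0 through 2
    render_time_s = 180.0        # hypothetical total render time in seconds

    images_per_hour = FRAMES / (render_time_s / 3600.0)
    print(f"{images_per_hour:.1f} images per hour")   # -> 60.0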

[Chart: 3DS Max 2013 rendering performance, in rendered images per hour]

The results look pretty chaotic at first sight, but the numbers are correct and can be verified by a third party, or by yourself for that matter. Let us try to make sense of them.

First of all, we used the NVIDIA Mental Ray renderer, which despite the "NVIDIA" part of its name is still a CPU-only renderer. Secondly, the new benchmark is better than the old one: most of the time all cores are working at very high CPU load, typically 96% and more. However, we noticed that without Hyper-Threading and CMT, the CPUs are able to turbo longer and at higher clock speeds, and there are small periods of single-threaded activity. These two facts together probably explain why disabling Hyper-Threading or CMT improves performance by 20% or more.

Cinebench reports that the Xeon E5-2660 is 20% faster than the Opteron 6380; in 3DS Max, the Xeon is up to 77% faster. The new Mental Ray engine seems to be extremely well optimized for Intel's architecture and underperforms on AMD's.

At the end of the day, it is clear that Intel has a huge advantage here, but also that this market is shifting more and more toward GPU rendering. That is outside the scope of this article, but many people in the rendering business are using GPU-accelerated rendering thanks to NVIDIA's iray renderer. CPU + GPU rendering with iray seems to outperform Mental Ray in almost all scenes except those with relatively simple lighting, so combining an Intel Xeon E5 with a fast GPU is the best option.

Comments

  • coder543 - Wednesday, February 20, 2013

    You realize that we have no trouble recognizing that you've posted about fifty comments that are essentially incompetent racism against AMD, right?

    AMD's processors aren't perfect, but neither are Intel's. And also, AMD, much to your dismay, never announced they were planning to get out of the x86 server market. They'll be joining the ARM server market, but not exclusively. I'm honestly just ready for x86 as a whole to be gone, completely and utterly. It's a horrible CPU architecture, but so much money has been poured into it that it has good performance for now.
  • Duwelon - Thursday, February 21, 2013

    x86 is fine, just fine.
  • coder543 - Wednesday, February 20, 2013

    totes, ain't nobody got time for AMD. they is teh failzor.

    (yeah, that's what I heard when I read your highly misinformed argument.)
  • quiksilvr - Wednesday, February 20, 2013

    Obvious trolling aside, looking at the numbers, it's pretty grim. Keep in mind that these are SERVER CPUs. Not only is Intel doing the job faster, it's using less energy, and paying a mere $100-$300 more per CPU to cut an average of 20 watts is a no-brainer. These are expected to run 24 hours a day, 7 days a week with no stopping. That power adds up, and if AMD has any chance of making a dent in high-end enterprise datacenters, they need to push even harder.
  • Beenthere - Wednesday, February 20, 2013

    You must be kidding. TCO is what enterprise looks at, and $100-$300 more per CPU, on top of the increased cost of Intel-based hardware, is precisely why AMD is recovering server market share.

    If you do the math, you'll find that most servers get upgraded long before the difference in power consumption between an Intel and an AMD CPU would pay for itself. The rated wattage of a CPU is not the actual wattage used under normal operation, and AMD has power-saving options in their FX-based CPUs as good as or better than what Intel has in IB. The bottom line is that those who write the checks are buying AMD again, and that's what really counts, in spite of the trolling.

    Rory Read has actually done a decent job so far, even though it's not over and it has been painful, especially seeing some talented and loyal AMD engineers and execs part ways with the company. This happens in most large company reorganizations; it's unfortunate but unavoidable. Those remaining at AMD seem up for the challenge, and some of the fruits of their labor are starting to show with the Jaguar cores. When the Steamroller cores debut later this year, AMD will take another step forward in servers and desktops.
  • Cotita - Wednesday, February 20, 2013

    Most servers have a long life. You'll probably upgrade memory and storage, but CPU is rarely upgraded.
  • Guspaz - Wednesday, February 20, 2013

    Let's assume $0.10 per kilowatt-hour. A $100 price difference buys 1000 kWh, which a 20 W delta takes 50,000 hours to consume. So the price difference would pay for itself in about 6 years.

    So yes, the power savings aren't really enough to justify the cost increase. The higher IPC on the Intel chips, however, might.
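
    (A quick Python sketch of the payback arithmetic above, using the same assumptions: a $100 price gap, a 20 W delta, and $0.10/kWh.)

        # Break-even point for a $100 CPU price gap at a 20 W power delta.
        price_delta_usd = 100.0
        power_delta_kw = 20.0 / 1000.0
        rate_usd_per_kwh = 0.10

        kwh_to_break_even = price_delta_usd / rate_usd_per_kwh   # 1000 kWh
        hours = kwh_to_break_even / power_delta_kw               # 50,000 hours
        years = hours / (24 * 365)                               # ~5.7 years
        print(f"{hours:,.0f} hours, about {years:.1f} years")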
  • bsd228 - Wednesday, February 20, 2013

    You're only getting part of the equation here. That extra 20 W of power consumed mostly turns into heat, which now must be cooled (requiring more power and more AC infrastructure). Each rack can have over 20 2U servers with two processors each, which means nearly an extra kilowatt per rack, and the corresponding extra heat.

    Also, power costs can vary considerably. I was at a company paying 16-17 cents in Oakland, CA; 11 cents in Sacramento; but only 2 cents in Central Washington (hydropower).
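
    (Extending that with a short Python sketch: the rack-level extra load described above, costed at the three sample electricity rates; cooling overhead would come on top of this.)

        # Extra power draw and yearly cost per rack: 20 x 2U servers,
        # two CPUs each, 20 W extra per CPU. Cooling is not included.
        extra_kw_per_rack = 20 * 2 * 20.0 / 1000.0               # 0.8 kW
        for rate_usd_per_kwh in (0.02, 0.11, 0.17):
            annual_usd = extra_kw_per_rack * 24 * 365 * rate_usd_per_kwh
            print(f"${rate_usd_per_kwh:.2f}/kWh -> ${annual_usd:,.0f} per rack per year")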
  • JonnyDough - Wednesday, February 20, 2013

    +as many as I could give. Best post!
  • Tams80 - Wednesday, February 20, 2013

    I wouldn't even ask the NYSE for the time of day.
