3D Rendering Performance

Today's desktop processors are more than fast enough to do professional-level 3D rendering at home. To look at performance under 3dsmax, we ran the SPECapc 3dsmax 8 benchmark (only the CPU rendering tests) under 3dsmax 9 SP1. The results reported are the rendering composite scores.

3dsmax 9 - SPECapc 3dsmax 8 CPU Test

At the risk of sounding like a broken record, we have a new champ once more. The 2600K is slightly ahead of the 980X here, while the 2500K, without Hyper-Threading, matches the performance of the i7 975. You really can't beat the performance Intel is offering here.

The i3 2100 is 11% faster than last year's i3 540, and delivers the same performance as the Athlon II X4 645.

Created by the Cinema 4D folks, Cinebench is a popular 3D rendering benchmark that gives us both single- and multi-threaded rendering results.

Cinebench R10 - Single Threaded Test

Single-threaded performance sees a huge improvement with Sandy Bridge. Even the Core i3 2100 is faster than the 980X in this test. Regardless of workload, light or heavy, Sandy Bridge is the chip to get.

Cinebench R10 - Multithreaded Test
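Cinebench reports a separate score for each test, and the ratio of the multi-threaded score to the single-threaded one gives a chip's effective multi-core speedup. Below is a minimal sketch of that calculation in Python; the scores used are made-up placeholders, not values from these charts:

# Multi-core scaling from a pair of Cinebench R10 scores.
# The example scores are hypothetical placeholders, not numbers from the charts.

def scaling(single_score: float, multi_score: float, cores: int):
    """Return (speedup, efficiency) for the multi-threaded run."""
    speedup = multi_score / single_score   # e.g. ~3.8x on a quad-core
    efficiency = speedup / cores           # 1.0 would be perfect linear scaling
    return speedup, efficiency

speedup, eff = scaling(single_score=6000.0, multi_score=22500.0, cores=4)
print(f"speedup: {speedup:.2f}x, efficiency: {eff:.0%}")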

POV-Ray is a popular, open-source raytracing application that doubles as a great tool to measure CPU floating point performance.

I ran the SMP benchmark in beta 23 of POV-Ray 3.7. The numbers reported are the final score in pixels per second.
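Since the score is simply rendered pixels divided by wall-clock time, a toy example makes the metric concrete. The sketch below is not POV-Ray's benchmark scene, just a minimal one-sphere raytracer timed the same way, with arbitrary scene values:

import time

def render(width: int, height: int) -> int:
    """Trace one ray per pixel from the origin against a single sphere."""
    cx, cy, cz, r = 0.0, 0.0, 3.0, 1.0          # arbitrary sphere center and radius
    hits = 0
    for y in range(height):
        for x in range(width):
            # Ray direction through the pixel on a virtual image plane at z=1.
            dx = (x - width / 2) / width
            dy = (y - height / 2) / height
            dz = 1.0
            # Solve |t*d - c|^2 = r^2 for t: a quadratic in t.
            a = dx * dx + dy * dy + dz * dz
            b = -2.0 * (dx * cx + dy * cy + dz * cz)
            c = cx * cx + cy * cy + cz * cz - r * r
            if b * b - 4.0 * a * c >= 0.0:      # discriminant >= 0 means a hit
                hits += 1
    return hits

w, h = 320, 240
start = time.perf_counter()
render(w, h)
elapsed = time.perf_counter() - start
print(f"{w * h / elapsed:,.0f} pixels per second")   # the POV-Ray-style score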

POV-Ray 3.7 Beta Benchmark

Blender 3D Character Render

Comments

  • GeorgeH - Monday, January 3, 2011

    With the unlocked multipliers, the only substantive difference between the 2500K and the 2600K is hyperthreading. Looking at the benchmarks here, it appears that at equivalent clockspeeds the 2600K might actually perform worse on average than the 2500K, especially if gaming is a high priority.

    A short article running both the 2500K and the 2600K at equal speeds (say "stock" @3.4GHz and overclocked @4.4GHz) might be very interesting, especially as a possible point of comparison for AMD's SMT approach with Bulldozer.

    Right now it looks like if you're not careful you could pay ~$100 more for a 2600K instead of a 2500K and end up with worse performance.
  • Gothmoth - Monday, January 3, 2011

    And what benchmarks are you speaking about?

    As Anand wrote, HT has no negative influence on performance.
  • GeorgeH - Monday, January 3, 2011

    The 2500K is faster in Crysis, Dragon Age, World of Warcraft and Starcraft II, despite being clocked slower than the 2600K. If it weren't for that clockspeed deficiency, it looks like it might also be faster in Left 4 Dead, Far Cry 2, and Dawn of War II. About the only games that look like a "win" for HT are Civ5 and Fallout 3.

    The 2500K also wins the x264 HD 3.03 1st Pass benchmark, and comes pretty close to the 2600K in a few others, again despite a clockspeed deficiency.

    Intel's new "no overclocking unless you get a K" policy looks like it might be a double-edged sword. Ignoring the IGP stuff, the only difference between a 2500K and a 2600K is HT; if you're spending extra for a K, you're going to be overclocking, making the 2500K's base clockspeed deficiency irrelevant. That means HT's deficiencies won't be able to hide behind lower clockspeeds and locked multipliers (as with the i5-7xx and i7-8xx).

    In the past, HT was a no-brainer: it might have hurt performance in some cases, but it also came with higher clocks that compensated for HT's shortcomings. Now that Intel has cut enthusiasts down to two choices, HT isn't as clear cut, especially if those enthusiasts are gamers - and most of them are.
  • Shorel - Monday, January 3, 2011

    I don't ever watch soap operas (why anybody would enjoy such crap is beyond me), but I game a lot. All my free time is spent gaming.

    High frame rates remind me of good video cards (or games that are not cutting edge), and so-called 24p film reminds me of Michael Bay movies where stuff happens fast but you can't see anything, like in Transformers.

    Please don't assume that your readers know or enjoy soap operas. Standard TV is for old people, and movies look amazing at 120Hz when almost all you do is gaming.
  • mmcc575 - Monday, January 3, 2011

    Just want to say thanks for such a great opening article on desktop SNB. The VS2008 benchmark was also a welcome addition!

    The SNB launch and CES together must mean a very busy time for you, but it would be great to get some clarification or more in-depth articles on a couple of areas.

    1. To clarify: if the LGA-2011 CPUs won't have an on-chip GPU, does this mean they will forgo arguably their best feature, Quick Sync?

    2. It would be great to have some more info on overclocking both the CPU and GPU, such as the process, how far you got on stock voltage, the effect on Quick Sync, and some overclocked CPU benchmarks.

    3. A look at the picture quality of the on-chip GPU when decoding video, compared to low-end discrete rivals from nVidia and AMD, as the main market for this will likely be people who want to decode video rather than play games. If you're feeling generous, maybe a run through the HQV benchmark? :P

    Thanks for reading, and congrats again for having the best launch-day content on the web.
  • ajp_anton - Monday, January 3, 2011

    In the Quantum of Solace comparison, the x86 and Radeon screenshots are the same.

    I dug up a ~15Mbit 1080p clip with some action and transcoded it to 4Mbit 720p using x264. So entirely software-based. My i7 920 does 140fps, which isn't too far away from Quick Sync. I'd love to see some quality comparisons between x264 on fastest settings and QS.
  • ajp_anton - Monday, January 3, 2011

    Also, in the Dark Knight comparison, it looks like the Radeon used the wrong levels (so not the encoder's fault). You should recheck the settings used both in the encoder and when you took the screenshot.
  • testmeplz - Monday, January 3, 2011

    Thanks for the great review! I believe the colors in the legend of the graphs on the Graphics overclocking page are mixed up.

    Thanks,
    Chris
  • politbureau - Monday, January 3, 2011

    Very concise. Cheers.

    One thing I miss is clock-for-clock benchmarks to highlight the effect of the architectural changes. Though perhaps not within the scope of this review, it would nonetheless be interesting to see how SNB fares against Bloomfield and Lynnfield at similar clock speeds.

    Cheerio
  • René André Poeltl - Monday, January 3, 2011

    Good performance at a bargain price - that used to be AMD's terrain.

    Now Sandy Bridge at ~$200 targets AMD's clientele. A Core i5-2500K for $216 - that's a bargain (a GPU worth ~$40 is even included). And the overclocking ability!

    If I understood it correctly, the Core i7 2600K at 4.4GHz drawing 111W under load is quite efficient. At 3.4GHz it draws 86W, so ~30% more clock (4.4GHz) costs ~30% more power and should give ~30% more performance... that would mean power consumption scales roughly 1:1 with performance (see the quick check after this comment).

    Many people need more performance per core, but not more cores. At 111W under load this would be the product they want - e.g. people who make music with PCs, not playing MP3s but mixing and producing music.

    But for more cores, the X6 Thuban is the better choice on a budget. For building a server on a budget, for example, Intel has no product to rival it. The same goes for developers, who may want as many cores as they can get to test their apps' multithreading performance.
    And AMD also scores with its more conservative approach to platform upgrades: people don't like buying a new motherboard every time they upgrade the CPU.
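As a quick check of the scaling arithmetic in the comment above, using the 86W and 111W load figures it quotes (a back-of-the-envelope sketch, not new measurements):

# Sanity check of the ~1:1 power/performance scaling claim above.
base_clock, oc_clock = 3.4, 4.4        # GHz: i7 2600K stock vs. overclocked
base_power, oc_power = 86.0, 111.0     # W under load, figures quoted above

clock_gain = oc_clock / base_clock - 1.0   # ~29% more frequency
power_gain = oc_power / base_power - 1.0   # ~29% more power

print(f"clock: +{clock_gain:.0%}, power: +{power_gain:.0%}")
# If performance tracks clock, power/performance scaling is roughly 1:1.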
