General Performance: SYSMark 2007

Our journey starts with SYSMark 2007, the only all-encompassing performance suite in our review today. The idea here is simple: one benchmark to indicate the overall performance of your machine. In practice, SYSMark 2007 ends up being more of a dual-core benchmark, as its applications and workloads make minimal use of more than two threads.
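
To make the scaling point concrete, here's a minimal, hypothetical Python sketch - not SYSMark code, and the workload and task counts are invented purely for illustration - showing why a job that only ever exposes two parallel tasks stops benefiting beyond two cores:

```python
# Hypothetical sketch (not SYSMark code): time a CPU-bound job that only
# exposes two parallel tasks, mimicking SYSMark 2007's thread behavior.
import time
from concurrent.futures import ProcessPoolExecutor

def busy(n: int) -> int:
    # Stand-in for one application task in the suite.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run(workers: int, tasks: int = 2, n: int = 20_000_000) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(busy, [n] * tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    for w in (1, 2, 4, 8):
        # Wall time stops improving past two workers: extra cores sit idle.
        print(f"{w} worker(s): {run(w):.2f}s")
```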

SYSMark 2007

The 2600K is our new champion: the $317 chip is faster than Intel's Core i7 980X here, as SYSMark 2007 doesn't really do much with the latter's extra two cores. Even the 2500K is a hair faster than the 980X. Compared to the Core i5 750, the upgrade is a no-brainer - Sandy Bridge is around 20% faster at the same price point as Lynnfield.

Compared to Clarkdale, however, the Core i3 2100 only manages a 5% advantage.

Adobe Photoshop CS4 Performance

To measure performance under Photoshop CS4 we turn to the Retouch Artists’ Speed Test. The test does basic photo editing; there are a couple of color space conversions, many layer creations, color curve adjustments, image and canvas size adjustments, an unsharp mask, and finally a Gaussian blur performed on the entire image.

The whole process is timed and thanks to the use of Intel's X25-M SSD as our test bed hard drive, performance is far more predictable than back when we used to test on mechanical disks.

Time is reported in seconds and the lower numbers mean better performance. The test is multithreaded and can hit all four cores in a quad-core machine.
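
As a rough illustration of what such a timed editing pipeline looks like in code, here is a small Python sketch using Pillow - an approximation of the same kinds of operations, not the actual Retouch Artists script, with the image size and filter parameters chosen arbitrarily:

```python
# Illustrative only: Pillow stand-ins for the Photoshop actions named above.
import time
from PIL import Image, ImageFilter

img = Image.new("RGB", (3000, 2000), "gray")  # stand-in for the test photo

start = time.perf_counter()
img = img.convert("CMYK").convert("RGB")             # color space conversions
img = img.point(lambda v: min(255, int(v * 1.1)))    # crude color curve tweak
img = img.resize((4500, 3000), Image.LANCZOS)        # image/canvas size change
img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
img = img.filter(ImageFilter.GaussianBlur(radius=4)) # blur on the whole image
elapsed = time.perf_counter() - start

print(f"pipeline time: {elapsed:.2f}s (lower is better)")
```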

Adobe Photoshop CS4 - Retouch Artists Benchmark

Once again, we have a new king - the 2600K is 9.7% faster than the 980X in our Photoshop CS4 test and the 2500K is just about equal to it. The Core i3 2100 does much better compared to the i3 540, outpacing it by around 30% and nearly equaling the performance of AMD's Phenom II X6 1100T.

Comments

  • GeorgeH - Monday, January 3, 2011 - link

    With the unlocked multipliers, the only substantive difference between the 2500K and the 2600K is hyperthreading. Looking at the benchmarks here, it appears that at equivalent clockspeeds the 2600K might actually perform worse on average than the 2500K, especially if gaming is a high priority.

    A short article running both the 2500K and the 2600K at equal speeds (say "stock" @3.4GHz and overclocked @4.4GHz) might be very interesting, especially as a possible point of comparison for AMD's SMT approach with Bulldozer.

    Right now it looks like if you're not careful you could end up paying ~$100 more for a 2600K instead of a 2500K and end up with worse performance.
  • Gothmoth - Monday, January 3, 2011 - link

    And which benchmarks are you speaking about?

    As Anand wrote, HT has no negative influence on performance.
  • GeorgeH - Monday, January 3, 2011 - link

    The 2500K is faster in Crysis, Dragon Age, World of Warcraft and Starcraft II, despite being clocked slower than the 2600K. If it weren't for that clockspeed deficiency, it looks like it might also be faster in Left 4 Dead, Far Cry 2, and Dawn of War II. Just about the only games that look like a "win" for HT are Civ5 and Fallout 3.

    The 2500K also wins the x264 HD 3.03 1st Pass benchmark, and comes pretty close to the 2600K in a few others, again despite a clockspeed deficiency.

    Intel's new "no overclocking unless you get a K" policy looks like it might be a double-edged sword. Ignoring the IGP stuff, the only difference between a 2500K and a 2600K is HT; if you're spending extra for a K you're going to be overclocking, making the 2500K's base clockspeed deficiency irrelevant. That means HT's deficiencies won't be able to hide behind lower clockspeeds and locked multipliers (as with the i5-7xx and i7-8xx.)

    In the past HT was a no-brainer; it might have hurt performance in some cases but it also came with higher clocks that compensated for HT's shortcomings. Now that Intel has cut enthusiasts down to two choices, HT isn't as clear cut, especially if those enthusiasts are gamers - and most of them are.
  • Shorel - Monday, January 3, 2011 - link

    I don't ever watch soap operas (why anybody would enjoy such crap is beyond me), but I game a lot. All my free time is spent gaming.

    A high frame rate reminds me of good video cards (or games that aren't cutting edge), and so-called 24p film reminds me of Michael Bay movies where stuff happens fast but you can't see anything, like in Transformers.

    Please don't assume that your readers know or enjoy soap operas. Standard TV is for old people, and movies look amazing at 120Hz when almost all you do is gaming.
  • mmcc575 - Monday, January 3, 2011 - link

    Just want to say thanks for such a great opening article on desktop SNB. The VS2008 benchmark was also a welcome addition!

    SNB launch and CES together must mean a very busy time for you, but it would be great to get some clarification/more in depth articles on a couple of areas.

    1. To clarify: if the LGA-2011 CPUs won't have an on-chip GPU, does this mean they will forgo arguably the best feature here, Quick Sync?

    2. It would be great to have some more info on overclocking both the CPU and GPU, such as the process, how far you got on stock voltage, the effect on Quick Sync, and some overclocked CPU benchmarks.

    3. A look at the picture quality of the on-chip GPU when decoding video compared to discrete low-end rivals from NVIDIA and AMD, as the main market for this will likely be people wanting to decode video rather than play games. If you're feeling generous, maybe a run through the HQV benchmark? :P

    Thanks for reading, and congrats again for having the best launch-day content on the web.
  • ajp_anton - Monday, January 3, 2011 - link

    In the Quantum of Solace comparison, the x86 and Radeon screens are the same.

    I dug up a ~15Mbit 1080p clip with some action and transcoded it to 4Mbit 720p using x264, so entirely software-based. My i7 920 does 140fps, which isn't too far off Quick Sync's pace. I'd love to see some quality comparisons between x264 on its fastest settings and QS.
  • ajp_anton - Monday, January 3, 2011 - link

    Also, in the Dark Knight comparison, it looks like the Radeon used the wrong levels (so not the encoder's fault). You should recheck the settings used both in the encoder and when you took the screenshot.
  • testmeplz - Monday, January 3, 2011 - link

    Thanks for the great review! I believe the colors in the legend of the graphs on the graphics overclocking page are mixed up.

    Thanks,
    Chris
  • politbureau - Monday, January 3, 2011 - link

    Very concise. Cheers.

    One thing I miss is clock-for-clock benchmarks to highlight the effect of the architectural changes. Though perhaps not within the scope of this review, it would nonetheless be interesting to see how SNB fares against Bloomfield and Lynnfield at similar clock speeds.

    Cheerio
  • René André Poeltl - Monday, January 3, 2011 - link

    Good performance for a bargain - that used to be AMD's terrain.

    Now Sandy Bridge at ~$200 targets AMD's clientele. A Core i5-2500K for $216 - that's a bargain (a GPU worth roughly $40 is even included). And the overclocking ability!

    If I understood it correctly, the Core i7 2600K drawing 111W under load at 4.4GHz is quite efficient: it draws 86W at 3.4GHz, and 4.4/3.4 is a ~29% clock increase for a ~29% power increase (111W/86W ≈ 1.29), so power consumption scales roughly 1:1 with performance.

    Many people need more performance per core, but not more cores. At 111W under load this would be the product they wanted - e.g. people who make music on PCs, not just playing MP3s but mixing and producing music.

    But for more cores, the X6 Thuban is the better choice on a budget; for building a server on a budget, Intel has no product to rival it. Developers may also want as many cores as they can get to test their apps' multithreading performance.
    AMD also scores with its more conservative approach to upgrades, e.g. motherboards. People don't like to buy a new motherboard every time they upgrade the CPU.
