Final Words

In terms of absolute CPU performance, Sandy Bridge doesn't actually move things forward. This isn't another ultra-high-end CPU launch, but rather a refresh for the performance mainstream and below. As one AnandTech editor put it, you get yesterday's performance at a much lower price point. Lynnfield took away a lot of the reason to buy an X58 system as it delivered most of the performance with much more affordable motherboards; Sandy Bridge all but puts the final nail in X58's coffin. Unless you're running a lot of heavily threaded applications, I would recommend a Core i7-2600K over even a Core i7-980X. While six cores are nice, you're better off pocketing the difference in cost and enjoying nearly the same performance across the board (if not better in many cases).

In all but the most heavily threaded applications, Sandy Bridge is the fastest chip on the block, and you get that performance at a fairly reasonable price. The Core i7-2600K is tempting at $317, but the Core i5-2500K is an absolute steal at $216. You're getting nearly $999 worth of performance at roughly a fifth of the cost. Compared to a Core i5-750/760 you'll get an additional 10-50% performance across the board in existing applications, and all that from only a ~25% increase in clock speed. A big portion of what Sandy Bridge delivers comes from architectural enhancements, the type of thing we've come to expect from an Intel tock. Starting with Conroe, repeating with Nehalem, and going strong once more with Sandy Bridge, Intel makes this all seem very easy.

Despite all the nastiness Intel has introduced by locking or limiting most of the Sandy Bridge CPUs, if you typically spend around $200 on a new CPU then Sandy Bridge is likely a better overclocker than anything you've owned before it. The biggest loser in the overclock locks is the Core i3, which now ships completely locked. Thankfully, AMD has taken care of the low-end segments very well over the past couple of years. All Intel is doing by enforcing clock locks on these lower-end chips is sending potential customers AMD's way.

The Core i3-2100 is still a step forward, but not nearly as much of one as the 2500K. For the most part you're getting a 5-20% increase in performance (although we did notice some 30-40% gains), but you're giving up overclocking as an option. For multithreaded workloads you're better off with an Athlon II X4 645; however, for lightly threaded work or a general purpose PC the Core i3-2100 is likely faster.

If this were a normal CPU, I'd probably end here, but Sandy Bridge is no normal chip. The on-die GPU and Quick Sync are both noteworthy additions. Back in 2006 I wondered if Intel would be able to stick to its aggressive tick-tock cadence. Today there's no question of whether or not Intel can do it. The question now is whether Intel will be able to sustain a similarly aggressive ramp in GPU performance and feature set. Clarkdale/Arrandale were both nice, but they didn't do much to compete with low-end discrete GPUs. Intel's HD Graphics 3000 makes today's $40-$50 discrete GPUs redundant. The problem there is we've never been happy with $40-$50 discrete GPUs for anything but HTPC use. What I really want to see from Ivy Bridge and beyond is the ability to compete with $70 GPUs. Give us that level of performance and then I'll be happy.

The HD Graphics 2000 is not as impressive. It's generally faster than what we had with Clarkdale, but it's not exactly moving the industry forward. Intel should just do away with the 6-EU version, or at least give more desktop SKUs the 3000 GPU. The lack of DX11 is acceptable for SNB consumers, but again, it's not moving the industry forward. I believe Intel does want to take graphics seriously, but I need to see more going forward.

Game developers need to put forth some effort as well. Intel has clearly tried to repair some of its bad reputation this time around, and simply blacklisting SNB graphics in games isn't helping anyone. Hopefully both sides will put in the requisite testing time to actually improve the situation.

Quick Sync is just awesome. It's simply the best way to get videos onto your smartphone or tablet. Not only do you get most if not all of the quality of a software-based transcode, you also get performance better than what high-end discrete GPUs can offer. If you do a lot of video transcoding for portable devices, Sandy Bridge will be worth the upgrade for Quick Sync alone.
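
For readers who want to experiment with hardware transcoding themselves: the consumer transcoding apps reach Quick Sync through Intel's Media SDK. As a purely illustrative sketch (not the software used in this review), newer ffmpeg builds expose the same hardware encoder as h264_qsv, and a portable-device transcode would look something like this:

    # Illustrative only: assumes an ffmpeg build with Intel Quick Sync (QSV) support.
    # Downscale a 1080p source to 720p and encode H.264 on the Quick Sync hardware.
    ffmpeg -i input_1080p.mkv \
        -vf scale=1280:720 \
        -c:v h264_qsv -b:v 4M \
        -c:a aac -b:a 128k \
        output_720p.mp4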

For everyone else, Sandy Bridge is easily a no-brainer. Unless you already have a high-end Core i7, this is what you'll want to upgrade to.

283 Comments

  • GeorgeH - Monday, January 3, 2011 - link

    With the unlocked multipliers, the only substantive difference between the 2500K and the 2600K is hyperthreading. Looking at the benchmarks here, it appears that at equivalent clockspeeds the 2600K might actually perform worse on average than the 2500K, especially if gaming is a high priority.

    A short article running both the 2500K and the 2600K at equal speeds (say "stock" @3.4GHz and overclocked @4.4GHz) might be very interesting, especially as a possible point of comparison for AMD's SMT approach with Bulldozer.

    Right now it looks like if you're not careful, you could pay ~$100 more for a 2600K instead of a 2500K and end up with worse performance.
  • Gothmoth - Monday, January 3, 2011 - link

    And what benchmarks are you speaking about?

    As Anand wrote, HT has no negative influence on performance.
  • GeorgeH - Monday, January 3, 2011 - link

    The 2500K is faster in Crysis, Dragon Age, World of Warcraft and Starcraft II, despite being clocked slower than the 2600K. If it weren't for that clockspeed deficiency, it looks like it might also be faster in Left 4 Dead, Far Cry 2, and Dawn of War II. Just about the only games that look like a "win" for HT are Civ5 and Fallout 3.

    The 2500K also wins the x264 HD 3.03 1st Pass benchmark, and comes pretty close to the 2600K in a few others, again despite a clockspeed deficiency.

    Intel's new "no overclocking unless you get a K" policy looks like it might be a double-edged sword. Ignoring the IGP stuff, the only difference between a 2500K and a 2600K is HT; if you're spending extra for a K you're going to be overclocking, making the 2500K's base clockspeed deficiency irrelevant. That means HT's deficiencies won't be able to hide behind lower clocks and locked multipliers (as with the i5-7xx and i7-8xx).

    In the past HT was a no-brainer; it might have hurt performance in some cases, but it also came with higher clocks that compensated for HT's shortcomings. Now that Intel has cut enthusiasts down to two choices, HT isn't as clear-cut, especially if those enthusiasts are gamers - and most of them are.
  • Shorel - Monday, January 3, 2011 - link

    I don't ever watch soap operas (why anybody would enjoy such crap is beyond me), but I game a lot. All my free time is spent gaming.

    High frame rates remind me of good video cards (or games that aren't cutting edge), and the so-called 24p film look reminds me of Michael Bay movies where stuff happens fast but you can't see anything, like in Transformers.

    Please don't assume that your readers know or enjoy soap operas. Standard TV is for old people, and movies look amazing at 120Hz when almost all you do is gaming.
  • mmcc575 - Monday, January 3, 2011 - link

    Just want to say thanks for such a great opening article on desktop SNB. The VS2008 benchmark was also a welcome addition!

    The SNB launch and CES together must mean a very busy time for you, but it would be great to get some clarification/more in-depth articles on a couple of areas.

    1. To clarify: if the LGA-2011 CPUs won't have an on-chip GPU, does this mean they will forgo Quick Sync, arguably the best feature?

    2. It would be great to have some more info on overclocking both the CPU and GPU, such as the process, how far you got on stock voltage, the effect on Quick Sync, and some overclocked CPU benchmarks.

    3. A look at the picture quality of the on-chip GPU when decoding video, compared to discrete low-end rivals from nVidia and AMD, as the main market for this will likely be people who want to decode video rather than play games. If you're feeling generous, maybe a run through the HQV benchmark? :P

    Thanks for reading, and congrats again for having the best launch-day content on the web.
  • ajp_anton - Monday, January 3, 2011 - link

    In the Quantum of Solace comparison, the x86 and Radeon screens are the same.

    I dug up a ~15Mbit 1080p clip with some action and transcoded it to 4Mbit 720p using x264, so entirely software-based. My i7-920 does 140fps, which isn't too far off Quick Sync. I'd love to see some quality comparisons between x264 on its fastest settings and QS.
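
    (For anyone who wants to run the same kind of software-only speed test, a command along these lines via ffmpeg's libx264 encoder should do it - illustrative settings, not necessarily exactly what I used:)

        # Software-only encode: scale 1080p to 720p, 4Mbit H.264, fastest preset.
        # Audio is dropped (-an) so the result is a pure encoder speed test.
        ffmpeg -i input_1080p.mkv \
            -vf scale=1280:720 \
            -c:v libx264 -preset ultrafast -b:v 4M \
            -an output_720p.mp4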
  • ajp_anton - Monday, January 3, 2011 - link

    Also, in the Dark Knight comparison, it looks like the Radeon used the wrong levels (so not the encoder's fault). You should recheck the settings used both in the encoder and when you took the screenshot.
  • testmeplz - Monday, January 3, 2011 - link

    Thanks for the great review! I believe the colors in the legend of the graphs on the Graphics overclocking page are mixed up.

    Thanks,
    Chris
  • politbureau - Monday, January 3, 2011 - link

    Very concise. Cheers.

    One thing I miss is clock-for-clock benchmarks to highlight the effect of architectural changes. Though perhaps not within the scope of this review, it would nonetheless be interesting to see how SNB fares against Bloomfield and Lynnfield at similar clock speeds.

    Cheerio
  • René André Poeltl - Monday, January 3, 2011 - link

    Good performance at a bargain price - that used to be AMD's terrain.

    Now Sandy Bridge at ~$200 targets AMD's clientele. A Core i5-2500K for $216 - that's a bargain (it even includes a GPU worth ~$40). And the overclocking ability!

    If I understood it correctly, the Core i7-2600K at 4.4GHz drawing 111W under load is quite efficient. At 3.4GHz it draws 86W, and going to 4.4GHz is ~30% more clock (4.4/3.4 ≈ 1.29) for ~30% more power (111/86 ≈ 1.29), so power consumption scales roughly 1:1 with performance.

    Many people need more performance per core, but not more cores. At 111W under load, this would be the product they want - e.g. people who make music on PCs, not just playing MP3s but mixing and producing music.

    But for more cores, the six-core Thuban is the better choice on a budget; for building a server on a budget, for example, Intel has no product to rival it. The same goes for developers, who may want as many cores as they can get to test their apps' multithreading performance.
    AMD also scores with its more conservative approach to platform upgrades: people don't like buying a new motherboard every time they upgrade the CPU.
