Final Words

In terms of absolute CPU performance, Sandy Bridge doesn't actually move things forward. This isn't another ultra-high-end CPU launch, but rather a refresh for the performance mainstream and below. As one AnandTech editor put it, you get yesterday's performance at a much lower price point. Lynnfield took away a lot of the reason to buy an X58 system as it delivered most of the performance with much more affordable motherboards; Sandy Bridge all but puts the final nail in X58's coffin. Unless you're running a lot of heavily threaded applications, I would recommend a Core i7-2600K over even a Core i7-980X. While six cores are nice, you're better off pocketing the difference in cost and enjoying nearly the same performance across the board (if not better in many cases).

In all but the most heavily threaded applications, Sandy Bridge is the fastest chip on the block, and you get that performance at a fairly reasonable price. The Core i7-2600K is tempting at $317, but the Core i5-2500K is absolutely a steal at $216. You're getting nearly $999 worth of performance at roughly a fifth of the cost. Compared to a Core i5-750/760, you'll get an additional 10-50% performance across the board in existing applications, and that's with only a ~25% increase in clock speed. A big portion of what Sandy Bridge delivers is therefore due to architectural enhancements, the type of thing we've come to expect from an Intel tock. Starting with Conroe, repeating with Nehalem, and going strong once more with Sandy Bridge, Intel makes this all seem so very easy.
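
To put rough numbers on that split, you can back out the per-clock (IPC) gain by dividing the overall speedup by the clock-speed ratio. A quick sketch, assuming the stock base clocks (2.66GHz for the i5-750, 3.3GHz for the i5-2500K) and ignoring Turbo:

```python
# Back-of-the-envelope split of Sandy Bridge's gain over Lynnfield into
# frequency vs. per-clock (IPC) improvement. Assumed base clocks:
# Core i5-750 at 2.66 GHz, Core i5-2500K at 3.3 GHz; Turbo is ignored.

clock_ratio = 3.3 / 2.66                 # ~1.24x from frequency alone

for total in (0.10, 0.30, 0.50):         # the 10-50% range quoted above
    ipc = (1 + total) / clock_ratio - 1  # the gain frequency can't explain
    print(f"+{total:.0%} overall -> {ipc:+.1%} per clock")

# +10% overall -> -11.3% per clock (workload isn't scaling with the CPU)
# +30% overall -> +4.8% per clock
# +50% overall -> +20.9% per clock (architecture rivals the clock bump)
```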

Despite all of the nastiness Intel introduced by locking or limiting most of the Sandy Bridge CPUs, if you typically spend around $200 on a new CPU then Sandy Bridge is likely a better overclocker than anything you've ever owned before it. The biggest loser in the overclocking locks is the Core i3, which now ships completely locked. Thankfully, AMD has taken care of the low-end segments very well over the past couple of years. All Intel is doing by enforcing clock locks on these lower-end chips is sending potential customers AMD's way.

The Core i3-2100 is still a step forward, but not nearly as much of one as the 2500K. For the most part you're getting a 5-20% increase in performance (although we did notice some 30-40% gains), but you're giving up overclocking as an option. For multithreaded workloads you're better off with an Athlon II X4 645; for lightly threaded work or a general-purpose PC, however, the Core i3-2100 is likely faster.

If this were a normal CPU, I'd probably end here, but Sandy Bridge is no normal chip. The on-die GPU and Quick Sync are both noteworthy additions. Back in 2006 I wondered if Intel would be able to stick to its aggressive tick-tock cadence. Today there's no question of whether or not Intel can do it. The question now is whether Intel will be able to sustain a similarly aggressive ramp in GPU performance and feature set. Clarkdale/Arrandale were both nice, but they didn't do much to compete with low-end discrete GPUs. Intel's HD Graphics 3000 makes today's $40-$50 discrete GPUs redundant. The problem there is we've never been happy with $40-$50 discrete GPUs for anything but HTPC use. What I really want to see from Ivy Bridge and beyond is the ability to compete with $70 GPUs. Give us that level of performance and then I'll be happy.

The HD Graphics 2000 is not as impressive. It's generally faster than what we had with Clarkdale, but it's not exactly moving the industry forward. Intel should just do away with the 6 EU version, or at least give more desktop SKUs the 3000 GPU. The lack of DX11 is acceptable for SNB consumers, but again, it's not moving the industry forward. I believe Intel does want to take graphics seriously, but I need to see more going forward.

Game developers need to put forth some effort as well. Intel has clearly tried to repair some of its bad reputation this time around, so simply blacklisting SNB graphics in games isn't helping anyone. Hopefully both sides will put in the requisite testing time to actually improve the situation.

Quick Sync is just awesome. It's simply the best way to get videos onto your smartphone or tablet. Not only do you get most, if not all, of the quality of a software-based transcode, you also get performance that's better than what high-end discrete GPUs can offer. If you do a lot of video transcoding for portable devices, Sandy Bridge will be worth the upgrade for Quick Sync alone.
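
If you want to experiment with hardware transcoding yourself, later ffmpeg builds expose Quick Sync through the h264_qsv encoder. Here's a minimal sketch of the kind of transcode-for-device job Quick Sync accelerates; the ffmpeg availability, file names, and bitrates are assumptions on our part, not the ArcSoft workflow used in our testing:

```python
# Illustrative only: a tiny wrapper that asks ffmpeg to encode H.264 on
# Intel's Quick Sync hardware. Assumes an ffmpeg build with the h264_qsv
# encoder is on PATH; file names and bitrates are placeholders.
import subprocess

def transcode_for_device(src: str, dst: str, bitrate: str = "2M") -> None:
    """Downscale to 720p and hand the H.264 encode to Quick Sync."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-vf", "scale=-2:720",   # fit to 720p, preserve aspect ratio
            "-c:v", "h264_qsv",      # hardware H.264 encode on the iGPU
            "-b:v", bitrate,
            "-c:a", "aac", "-b:a", "128k",
            dst,
        ],
        check=True,  # raise if ffmpeg reports an error
    )

transcode_for_device("movie.mkv", "movie_for_tablet.mp4")
```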

For everyone else, Sandy Bridge is simply a no-brainer. Unless you already have a high-end Core i7, this is what you'll want to upgrade to.

Comments

  • auhgnist - Monday, January 17, 2011

    For example, between i3-2100 and i7-2600?
  • timminata - Wednesday, January 19, 2011

    I was wondering: does the integrated GPU provide any benefit if you're using it with a dedicated graphics card anyway (GTX 470), or would it just sit idle?
  • James5mith - Friday, January 21, 2011

    Just thought I would comment with my experience. I am unable to get Blu-ray playback, or even CableCARD TV playback, with the Intel integrated graphics on my new i5-2500K with an Asus motherboard. Why, you ask? The same problem Intel has always had: it doesn't handle EDIDs correctly when there is a receiver in the path between it and the display.

    To be fair, I have an older Westinghouse monitor and an Onkyo TX-SR606. But the fact that all I had to do was reinstall my HD 5450 (which I wanted to get rid of when I did the upgrade to Sandy Bridge) and all my problems were gone points to the fact that Intel still hasn't gotten it right when it comes to EDIDs, HDCP handshakes, etc.

    So sad too, because otherwise I love the upgraded platform for my HTPC. I just wish I didn't have to add in the discrete graphics.
  • palenholik - Wednesday, January 26, 2011

    As I understood from the article, you used just this one piece of software for all of these tests, and I understand why. But is that enough to conclude that CUDA causes bad or low picture quality?

    I am very interested in, and do research on, H.264 and x264 encoding and decoding performance, especially on GPUs. I have tested Xilisoft Video Converter 6, which supports CUDA, and I didn't have problems with low picture quality when using CUDA. I ran these tests on an NVIDIA 8600 GT for the TV station I work for, while researching a solution for compressing video to send over the internet with little or no quality loss.

    So, could it be that Arcsoft Media Converter simply cooperates poorly with CUDA?

    And I must note here how well the AMD Phenom II X6 performs, comparable to an NVIDIA GTX 460. This means one could buy a motherboard with integrated graphics and a Phenom II X6 and have very good encoding performance in terms of both speed and quality. Intel is the winner here, no doubt, but jumping from socket to socket and changing the entire platform troubles me.

    Nice and very useful article.
  • ellarpc - Wednesday, January 26, 2011

    I'm curious why Bad Company 2 gets left out of Anand's CPU benchmarks. It seems to be a CPU-dependent game: when I play it, all four cores are nearly maxed out while my GPU barely reaches 60% usage, where most other games seem to be the opposite.
  • Kidster3001 - Friday, January 28, 2011

    Nice article. It cleared up much about the new chips I had questions on.

    A suggestion: I have worked in the chip-making business. Perhaps you could run an article on how bin-splits and features are affected by yields and defects. Many here seem to believe that all features work on all chips (and that the company simply chooses to disable them), when that is not true. Some features, such as virtualization, are excluded from SKUs for business reasons. These are indeed disabled by the manufacturer inside certain chips (usually chips where that feature is defective anyway, though working chips can be disabled too if the market is large enough to sell more). Other features, such as less cache or lower speeds, are missing from some SKUs because those chips have a defect that causes that feature not to work, or not to run as fast. Rather than throwing those chips away, companies can sell them at a cheaper price; e.g., a Celeron where half the cache in the chip doesn't work right, so it's disabled.

    It works both ways, though. Some of the low-end chips must come from better chips that have been down-binned; otherwise there wouldn't be enough low-end chips to go around.
  • katleo123 - Tuesday, February 1, 2011

    It is not expected to compete with Core i7 processors or take their place.
    Sandy Bridge uses fixed-function processing to produce better graphics at the same power consumption as the Core i series.
    Visit http://www.techreign.com/2010/12/intels-sandy-brid...
  • jmascarenhas - Friday, February 4, 2011

    The problem is that we have to choose between using the integrated GPU, which means an H67 board, or doing some overclocking, which means a P67. I wonder why we have to make this choice... it just means that if we don't game and the 3000 is fine, we have to go with the H67 and therefore can't overclock the processor...
  • jmascarenhas - Monday, February 7, 2011

    And what about those who want to overclock and don't need a dedicated graphics board? I understand Intel wanting to get money out of early adopters, but don't count on me.
  • fackamato - Sunday, February 13, 2011

    Get the K version anyway? The integrated GPU gets disabled when you use an external GPU, AFAIK.
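
As a postscript to Kidster3001's bin-split comment above, here's a toy model of the idea; the defect rates, speed bins, and SKU rules are invented purely for illustration and bear no relation to Intel's actual numbers.

```python
# Toy bin-split model: each die gets random "defects", and its SKU is
# whatever the silicon can support. All rates and rules are made up.
import random
from collections import Counter

random.seed(1)

def bin_chip() -> str:
    """Assign one die to a SKU based on which (random) defects it has."""
    half_cache_dead = random.random() < 0.10          # assumed defect rate
    top_speed = random.choice([2.8, 3.0, 3.3, 3.4])   # assumed speed bins (GHz)

    if half_cache_dead:
        return "budget SKU (half the cache fused off)"
    if top_speed >= 3.3:
        return "high-end SKU"
    # A fully working die can still be sold as a cheaper part (down-binned)
    # when demand for low-end chips outstrips the supply of defective dies.
    return "mid-range SKU"

print(Counter(bin_chip() for _ in range(10_000)))
```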
