Final Words

There are two aspects of today's launch that bother me: the lack of Quick Sync and the chipset. The first is easy to understand. Sandy Bridge E is supposed to be a no-compromise, ultra high-end desktop solution, and the lack of an on-die GPU with Quick Sync support means adopting the platform is inherently a compromise. I'm not sure what solution Intel could have arrived at (I wouldn't want to give up a pair of cores for a GPU and Quick Sync), but I don't like performance/functionality tradeoffs in this class of product. Second, while I'm not a SAS user, I would've at least appreciated more 6Gbps SATA ports on the chipset. Native USB 3.0 support would've been nice as well. Instead what we got was effectively a 6-series chipset with a new name. As Intel's flagship chipset, the X79 falls short.


From left to right: Intel Core i7 (SNB-E, LGA-2011), Core i7 (Gulftown, LGA-1366), Core i5 (SNB, LGA-1155), Core i5 (Clarkdale, LGA-1156), Core 2 Duo (LGA-775)

The vast majority of desktop users, even enthusiast-class users, will likely have no need for Sandy Bridge E. The Core i7 3960X may be the world's fastest desktop CPU, but it really requires a heavily threaded workload to prove it. What the 3960X doesn't do is make your gaming experience any better or speed up the majority of desktop applications. The 3960X won't be any slower than the fastest Sandy Bridge CPUs, but it won't be tremendously faster either. The desktop market is clearly well served by Intel's LGA-1155 platform (and its lineage); LGA-2011 is simply a platform for users who need a true powerhouse.

There are no surprises there; we came to the same conclusion when we reviewed Intel's first 6-core CPU last year. If you do happen to have a heavily threaded workload that needs the absolute best performance, the Core i7 3960X can deliver: in our most thread-heavy tests the 3960X had no problem outpacing the Core i7 2600K by over 50%. If your livelihood depends on it, the 3960X is worth its entry fee. I suspect that for those same workloads the 3930K will be a good balance of price and performance despite its smaller L3 cache. I'm not terribly interested in next year's Core i7 3820. It's clearly aimed at users who need the memory bandwidth or PCIe lanes of SNB-E but don't need more than four cores. I would've liked to have seen a value 6-core offering instead, but I suppose with a 435mm² die that's a tough sell for Intel management.
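To put a number on "heavily threaded," here's a minimal sketch of the kind of scaling probe that separates these chips (a toy illustration of the idea, not the benchmark suite used in this review; the job sizes and worker counts are arbitrary). Workloads that keep scaling past four workers are the ones where the 3960X's six cores and twelve threads pull away from a 2600K:

```python
# Toy thread-scaling probe (hypothetical, not AnandTech's methodology):
# time a fixed batch of CPU-bound jobs with more and more workers.
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    """CPU-bound busy work standing in for a render/encode job."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_batch(workers, jobs=24, size=2_000_000):
    """Return wall-clock time to finish the whole batch with N workers."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(burn, [size] * jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    base = run_batch(1)
    for w in (2, 4, 6, 8, 12):
        t = run_batch(w)
        print(f"{w:2d} workers: {t:6.2f}s  speedup {base / t:4.2f}x")
```

On a quad-core part the speedup curve flattens near four workers; on a six-core, twelve-thread part it keeps climbing, and that is the shape of workload where the 50%+ gains show up.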

Of course, compute isn't the only advantage of the Sandy Bridge E platform. With eight DIMM slots on most high-end LGA-2011 motherboards, you'll be able to throw tons of memory at your system if you need it, without having to shop for workstation motherboards with fewer frills.
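The bandwidth side of the memory story is simple arithmetic. As a back-of-the-envelope sketch (theoretical peaks only; sustained bandwidth is always lower), compare SNB-E's four channels of officially supported DDR3-1600 to mainstream Sandy Bridge's two channels of DDR3-1333:

```python
# Theoretical peak bandwidth: transfer rate (MT/s) x 8 bytes per
# 64-bit channel x channel count. Peaks only; sustained is lower.
def peak_gb_s(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000

print(f"LGA-1155, 2ch DDR3-1333: {peak_gb_s(1333, 2):.1f} GB/s")  # ~21.3
print(f"LGA-2011, 4ch DDR3-1600: {peak_gb_s(1600, 4):.1f} GB/s")  # ~51.2
```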

As for the future of the platform, Intel has already begun talking about Ivy Bridge E. If it follows the pattern set by Ivy Bridge on LGA-1155, IVB-E should be a drop-in replacement for LGA-2011 motherboards. The biggest issue there is timing. Ivy will arrive for the mainstream LGA-1155 platforms around the middle of 2012; given the late launch of SNB-E, I don't know that we'd see its successor on LGA-2011 until the end of next year at the earliest, or perhaps even early 2013. This seems to be the long-term downside to these ultra high-end desktop platforms: you end up on a delayed release cadence for each tick/tock on the roadmap. If you've always got to have the latest and greatest, this may prove frustrating. Based on what we know of Ivy Bridge, however, I suspect that if you're using all six of SNB-E's cores, you'll wish you had IVB-E sooner but won't be tempted away from the platform by a quad-core Ivy Bridge on LGA-1155.

I do worry about the long-term viability of the ultra high-end desktop platform. As we showed here, some of the gains in threaded apps exceed 50% over a standard Sandy Bridge; that's tangible performance to those who can use it. With the growth in cloud computing, it's clear there's demand for these types of chips in servers. I just hope Intel continues to offer a version for desktop users as well.

Comments

  • DanNeely - Monday, November 14, 2011 - link

    AMD's been selling 6-core Phenom CPUs since April 2010 (6-core Opterons launched in June 2009). Prior to SB's launch they were very competitive with Intel systems at the same mobo+CPU price points, and while they've fallen behind since then, they're still decent buys for more threaded apps because AMD has slashed prices to compete.

    On the Intel side, while Hyper-Threading isn't 8 real cores, for most workloads 8 threads will run significantly faster than 4.
  • ClagMaster - Monday, November 14, 2011 - link

    This Sandy Bridge E is really a desktop supercomputer, well suited for engineering workstations that run Abaqus or Monte Carlo programs. With that intent, the Xeon brand of this processor, with eight cores and ECC memory support, is the processor to buy.

    The Xeon will very likely have the SAS support Anand so laments the lack of, via a specialty chipset based on the X79. And engineering workstations are not made or broken by the lack of native USB 3.0 controllers.

    DDR3-1333 is no slouch. With four channels of that memory there will be much faster memory IO than on a two-channel i7-2700K system with the same memory.

    This Sandy Bridge E consumer chip is for those true, frothing, narcissistic enthusiasts who have thousands of USD to burn and want the bragging rights.

    I suppose it's their money to waste and their chests to thump.

    As for myself, I would have purchased an ASUS C206 workstation board and an E3-1240 Xeon processor.
  • sylar365 - Monday, November 14, 2011 - link

    Everybody is seeing the benchmarks and claiming that this processor is overkill for gaming, but aren't all of these "real world" gaming benchmarks run with the game as the ONLY application open at the time of testing? I understand that you need to reduce the number of variables in order to produce accurate numbers across multiple platforms, but what I really want to know, more than "can it run (insert game) at 60fps", is this:

    Can it run (for instance) Battlefield 3 multiplayer on "High" ALONGSIDE Origin, Chrome, Skype, Pandora One and streaming software while giving a decent stream quality?

    Streaming gameplay has become popular. Justin.tv has spun off Twitch.tv as a separate site just to handle all of the gamers streaming their own gameplay. Streaming software such as XSplit Broadcaster does REAL TIME video encoding of screen captures or Gamesource and then bundles it all for streaming in one swoop, ALL WHILE PLAYING THE GAME AT THE SAME TIME. For streamers who count on ad revenue as a source of income it becomes less about Time = Money and more about Quality = Money, since everything is required to happen in real time. I happen to know for a fact that a 2500K @ 4.0GHz chokes on these tasks, and it directly impacts the quality of the streaming experience. Don't even get me started on trying to stream Skyrim at 720p, a game that actually taxes the processor. What is the point of running a game at its highest possible settings at 60fps if the people watching only see something like a watercolor re-imagining at the other end? Once you hurdle bandwidth constraints and networking issues, the stream quality is nearly 100% dependent on the processor and its immediate subsystem. Video cards need not apply here.

    There has got to be a way to determine whether multiple programs can be run efficiently on different threads of these modern processors, or at least a way to see whether there would be an advantage to having a 3960X over a 2500K in a situation like I am describing. And I know I can't be the only person who is running more than one program at a time. (Am I?) I mean, I understand that some applications are not coded to benefit from more than one core, but can multi-core or multi-threaded processors even help in situations where you are actually using more than one single-threaded (or multi-threaded) application at a time? What would the impact of quad-channel memory be when, heaven forbid, TWO taxing applications are being run at the SAME TIME!? GASP!
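For what it's worth, the experiment sylar365 asks for above is easy to approximate at home. Below is a minimal, hypothetical Python sketch (an editor's illustration, not anything from the review; the per-frame workload and the four-process stand-in for a streaming encoder are assumptions): it times a CPU-bound stand-in for a game's frame loop alone, then again while competing "encoder" processes run.

```python
# Hypothetical contention test: measure a CPU-bound "game loop" alone,
# then with competing "encoder" processes fighting for cores.
import statistics
import time
from multiprocessing import Process

def encoder_spin(seconds):
    """Stand-in for a real-time encoder: pure CPU busy work."""
    deadline = time.perf_counter() + seconds
    x = 0
    while time.perf_counter() < deadline:
        x += 1

def frame_times(frames=300):
    """Run a fixed per-frame CPU workload and record each frame's time."""
    times = []
    for _ in range(frames):
        t0 = time.perf_counter()
        sum(i * i for i in range(50_000))  # the 'game' work per frame
        times.append(time.perf_counter() - t0)
    return times

if __name__ == "__main__":
    alone = frame_times()

    # Streaming encoders like x264 are heavily threaded; approximate that
    # appetite with four competing processes (an assumption, not a model).
    rivals = [Process(target=encoder_spin, args=(60.0,)) for _ in range(4)]
    for r in rivals:
        r.start()
    contended = frame_times()
    for r in rivals:
        r.terminate()
        r.join()

    print(f"alone:     {statistics.mean(alone) * 1e3:.2f} ms/frame")
    print(f"contended: {statistics.mean(contended) * 1e3:.2f} ms/frame")
```

On a chip with idle cores the contended frame times barely move; on a fully loaded quad-core they balloon, which is exactly the difference the streaming scenario would expose.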
  • N4g4rok - Monday, November 14, 2011 - link

    That's a good point, but don't forget that a lot of games are so CPU intensive that it would take more than just background applications to cause the CPU to lose its performance during gameplay. I can't agree with the statement that streaming video will be completely dependent on the processor. The right software will support hardware acceleration, and would likely tax the GPU just as much as the CPU.

    However, with this processor, and a lot of Intel processors with Hyper-Threading, you would be sacrificing just a little bit of its turbo frequency to deal with those background applications. That should not be a problem for this system.

    Also, keep in mind that benchmarks are just trying to give a general case. If you know how well one application runs, and you know how well another runs, you should be able to come up with a rough idea of how it will handle both of those tasks at the same time. And it's likely that the system running these games is also running the necessary background software; you can assume things like Intel's Turbo Boost controller or the GPU driver software, etc.
  • N4g4rok - Monday, November 14, 2011 - link

    "but don't forget that a lot of games are so CPU intensive that it would take more than...."

    My mistake, I meant 'GPU' here.
  • sylar365 - Monday, November 14, 2011 - link

    "The right software will support hardware acceleration, and would likely tax the GPU just as much as the CPU"

    In almost every modern game I wouldn't want my streaming software to utilize the GPU(s), since they are already being fully utilized to make the game run smoothly. Besides, most streaming software I know of doesn't even have the option to use that hardware yet. If it did, I suppose you could start looking at Tesla cards just to help process the conversion and encoding of stream video, but then you are talking about multiple thousands of dollars just for the Tesla hardware. You should check out Tom's own BF3 performance review and see how much GPU compute power would be left after getting a smooth experience at 1080p on the local machine. It seems like the 3960X could help. But I will evidently need to take the gamble of spending $xxxx myself, since I don't get hardware sent to me for review and no review sites are posting any results for using two power-hungry applications at the same time.
  • N4g4rok - Tuesday, November 15, 2011 - link

    No kidding.

    Even with its performance, it's difficult to justify that price.
  • shady28 - Monday, November 14, 2011 - link


    Could rename this article 'Core i7 3960X - Diminishing Returns'

    Not impressed at all with this new chip. Maybe if you're doing a ton of multitasking all the time (like constantly doing background decoding) it would be worth it, but even in the multitasking benchmarks it isn't exactly revolutionary.

    If multitasking is that big of a deal, you're better off getting a G34 board and popping in a pair of 8- or 12-core Magny-Cours AMDs. Or maybe the new 16-core Interlagos on G34. Heck, the 16-core is selling for $650 at Newegg already.

    For anything else, it's really only marginally faster while probably being considerably more expensive.
  • Bochi - Monday, November 14, 2011 - link

    Can we get benchmarks that show the potential impact of the greater CPU power & memory bandwidth? This may be overkill for gaming at 1920 x 1080. However, I would like to know what type of performance changes are possible when it's used in a top-end CrossFire or SLI system.
  • rs2 - Monday, November 14, 2011 - link

    "I had to increase core voltage from 1.104V to 1.44V, but the system was stable."

    Surely that is a typo?
