The Test

To keep the review length manageable, we're presenting a subset of our results here. For all benchmark results and even more comparisons, be sure to use our performance comparison tool: Bench.

Motherboard:         ASUS P8Z68-V Pro (Intel Z68)
                     ASUS Crosshair V Formula (AMD 990FX)
                     Intel DX79SI (Intel X79)
Hard Disk:           Intel X25-M SSD (80GB)
                     Crucial RealSSD C300
Memory:              4 x 4GB G.Skill Ripjaws X DDR3-1600 9-9-9-20
Video Card:          ATI Radeon HD 5870 (Windows 7)
Video Drivers:       AMD Catalyst 11.10 Beta (Windows 7)
Desktop Resolution:  1920 x 1200
OS:                  Windows 7 x64

Cache and Memory Bandwidth Performance

The biggest changes from the original Sandy Bridge are the increased L3 cache size and the quad-channel memory interface. We'll first look at the impact a 15MB L3 has on latency:

Cache/Memory Latency Comparison (in clocks)

                                     L1    L2    L3    Main Memory
AMD FX-8150 (3.6GHz)                  4    21    65    195
AMD Phenom II X4 975 BE (3.6GHz)      3    15    59    182
AMD Phenom II X6 1100T (3.3GHz)       3    14    55    157
Intel Core i5 2500K (3.3GHz)          4    11    25    148
Intel Core i7 3960X (3.3GHz)          4    11    30    167

Cachemem shows a 5-cycle increase in L3 latency over Sandy Bridge. If that's correct, hits in L3 can take 20% longer to reach the requesting core, so small, lightly threaded applications may see a slight regression in performance compared to Sandy Bridge. More likely than not, however, the ~2 - 2.5x increase in L3 cache size will more than make up for the added latency. Also note that despite the larger cache, and thanks to its ring bus, Sandy Bridge E's L3 is still lower latency than Gulftown's.
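For context on how numbers like these are measured: latency tools of this sort typically use a pointer chase, walking a chain of dependent loads through progressively larger buffers so that each working-set size exposes the latency of whichever cache level it fits in. Below is a minimal sketch of the technique in C; the line size, buffer sweep, and sequential (non-randomized) chain are illustrative assumptions, not cachemem's actual methodology.

```c
/* Minimal pointer-chase latency sketch (illustrative; not the cachemem
 * tool used in the review). Walks a chain of dependent loads through a
 * buffer; average time per hop approximates load-to-use latency for
 * whichever cache level the buffer fits in. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define LINE 64                          /* assume 64-byte cache lines */

static double ns_per_load(size_t bytes, size_t hops)
{
    size_t n = bytes / sizeof(void *);
    size_t step = LINE / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));
    if (!buf) return 0.0;

    /* Circular chain, one hop per cache line. A real tool randomizes
     * the order so the hardware prefetcher can't hide the latency. */
    for (size_t i = 0; i < n; i += step)
        buf[i] = &buf[(i + step) % n];

    void **p = (void **)buf[0];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < hops; i++)
        p = (void **)*p;                 /* each load depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    fprintf(stderr, "%p\n", (void *)p);  /* keep the chain live */
    free(buf);
    return ns / hops;
}

int main(void)
{
    /* Sweep from L1-sized to memory-sized working sets. */
    for (size_t kb = 16; kb <= 64 * 1024; kb *= 2)
        printf("%6zu KB: %.2f ns/load\n", kb, ns_per_load(kb * 1024, 10000000));
    return 0;
}
```

Multiplying ns/load by the clock frequency converts to cycles: at 3.3GHz a cycle is ~0.3ns, so the 3960X's 30-cycle L3 corresponds to roughly 9ns.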

Memory Bandwidth Comparison - Sandra 2012.01.18.10

                                                    Aggregate Memory Bandwidth
Intel Core i7 3960X (Quad Channel, DDR3-1600)       37.0 GB/s
Intel Core i7 2600K (Dual Channel, DDR3-1600)       21.2 GB/s
Intel Core i7 990X (Triple Channel, DDR3-1333)      19.9 GB/s

Memory bandwidth is also up significantly. With all four channels populated with DDR3-1600 memory, Sandy Bridge E delivered 37GB/s in Sandra's memory bandwidth test. Given the 51.2GB/s theoretical maximum of this configuration and a fairly standard ~20% overhead, 37GB/s is just about what we want to see here.
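That theoretical maximum comes from simple arithmetic: each DDR3-1600 channel moves 8 bytes per transfer at 1600 MT/s, or 12.8GB/s, and four channels yield 51.2GB/s aggregate. For readers who want a rough sustained-bandwidth number on their own hardware, a minimal STREAM-style copy test is sketched below; the array size, repeat count, and memcpy-based kernel are illustrative assumptions, and this is not the Sandra test used above.

```c
/* Minimal STREAM-style copy bandwidth sketch (illustrative; not the
 * Sandra benchmark used above). Compile with optimization, e.g.
 * gcc -O2 bw.c -o bw  (add -lrt on older glibc). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N (32 * 1024 * 1024)   /* 32M doubles = 256MB per array,
                                  far larger than any cache */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    if (!a || !b) return 1;

    /* Touch every page up front so timing excludes page faults. */
    memset(a, 1, N * sizeof(double));
    memset(b, 0, N * sizeof(double));

    const int reps = 10;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < reps; r++)
        memcpy(b, a, N * sizeof(double));
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* Each copy reads and writes N doubles: 2 * N * 8 bytes of traffic. */
    double gbs = 2.0 * N * sizeof(double) * reps / sec / 1e9;
    printf("copy: %.1f GB/s\n", gbs);

    free(a);
    free(b);
    return 0;
}
```

(A subtlety: ordinary stores trigger a read-for-ownership on the destination, so the memory controller may move more bytes than the 2N counted here; serious tools use non-temporal stores or account for this.)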

Comments

  • DanNeely - Monday, November 14, 2011 - link

    AMD's been selling 6-core Phenom CPUs since April 2010 (6-core Opterons launched in June '09). Prior to SB's launch they were very competitive with Intel systems at the same mobo+CPU price points, and while they've fallen behind since then, they're still decent buys for more threaded apps because AMD has slashed prices to compete.

    On the Intel side, while hyperthreading isn't 8 real cores, for most workloads 8 threads will run significantly faster than 4.
  • ClagMaster - Monday, November 14, 2011 - link

    This Sandy Bridge E is really a desktop supercomputer well-suited for engineering workstations that run Abaqus or Monte Carlo programs. With that intent, the Xeon brand of this processor, with eight cores and ECC memory support, is the processor to buy.

    The Xeon will very likely have the SAS support that Anand so laments, on a specialty chipset based on the X79. And engineering workstations are not made or broken by the lack of native USB 3 controllers.

    DDR3-1333 is no slouch either. With four channels of it, memory IO will be much faster than on a two-channel i7-2700K system with the same memory.

    This Sandy Bridge E consumer chip is for those true, frothing, narcissistic enthusiasts who have thousands of USD to burn and want the bragging rights.

    I suppose it's their money to waste and their chests to thump.

    As for myself, I would have purchased an ASUS C206 workstation board and an E3-1240 Xeon processor.
  • sylar365 - Monday, November 14, 2011 - link

    Everybody is seeing the benchmarks and claiming that this processor is overkill for gaming, but aren't all of these "real world" gaming benchmarks run with the game as the ONLY application open at the time of testing? I understand that you need to reduce the number of variables to produce accurate numbers across multiple platforms, but what I really want to know, more than "can it run (insert game) at 60fps," is this:

    Can it run (for instance) Battlefield 3 multiplayer on "High" ALONGSIDE Origin, Chrome, Skype, Pandora One and streaming software while giving a decent stream quality?

    Streaming gameplay has become popular. Justin.tv has spun off Twitch.tv as a separate site just to handle all of the gamers streaming their gameplay. Streaming software such as XSplit Broadcaster does REAL TIME video encoding of screen captures or GameSource, bundling everything for streaming in one swoop, ALL WHILE PLAYING THE GAME AT THE SAME TIME. For streamers who count on ad revenue as a source of income it becomes less about Time = Money and more about Quality = Money, since everything is required to happen in real time. I happen to know for a fact that a 2500K @ 4.0GHz chokes on these tasks, and it directly impacts the quality of the streaming experience. Don't even get me started on trying to stream Skyrim at 720p, a game that actually taxes the processor. What is the point of running a game at its highest possible settings at 60fps if the people watching only see something like a watercolor re-imagining at the other end? Once you hurdle bandwidth constraints and networking issues, the stream quality is nearly 100% dependent on the processor and its immediate subsystem. Video cards need not apply here.

    There has got to be a way to determine whether multiple programs can be run efficiently in different threads on these modern processors. Or at least a way to see whether there would be an advantage to having a 3960X over a 2500K in a situation like I am describing. And I know I can't be the only person who runs more than one program at a time. (Am I?) I mean, I understand that some applications are not coded to benefit from more than one core, but can multi-core or multi-threaded processors even help in situations where you are actually using more than one single-threaded (or multi-threaded) application at a time? What would the impact of quad-channel memory be when, heaven forbid, TWO taxing applications are run at the SAME TIME!? GASP!
  • N4g4rok - Monday, November 14, 2011 - link

    That's a good point, but don't forget that a lot of games are so CPU intensive that it would take more than just background applications to make the CPU lose its performance during gameplay. I can't agree with the statement that streaming video is completely dependent on the processor. The right software will support hardware acceleration and would likely tax the GPU just as much as the CPU.

    However, with this processor, and a lot of Intel processors with hyperthreading, you would be sacrificing just a little bit of its turbo frequency to deal with those background applications, which should not be a problem for this system.

    Also, keep in mind that benchmarks are just trying to give a general case. If you know how well one application runs, and you know how well another runs, you should be able to come up with a rough idea of how the system will handle both of those tasks at the same time. And it's likely that the system running these games is also running the necessary background software; you can assume things like Intel's Turbo Boost controller, the GPU driver software, etc. are present.
  • N4g4rok - Monday, November 14, 2011 - link

    "but don't forget that a lot of games are so CPU intensive that it would take more than...."

    My mistake, i meant 'GPU' here.
  • sylar365 - Monday, November 14, 2011 - link

    "The right software will support hardware acceleration, and would likely tax the GPU just as much as the CPU"

    In almost every modern game I wouldn't want my streaming software to utilize the GPU(s), since they're already being fully utilized to make the game run smoothly. Besides, most streaming software I know of doesn't even have the option to use that hardware yet. If it did, I suppose you could start looking at Tesla cards just to help with the conversion and encoding of stream video, but then you are talking about multiple thousands of dollars just for the Tesla hardware. You should check out Tom's own BF3 performance review and see how much GPU compute power would be left after getting a smooth experience at 1080p on the local machine. It seems like the 3960X could help. But I will evidently need to take the gamble of spending $xxxx myself, since I don't get hardware sent to me for review and no review sites are posting any results for running two power-hungry applications at the same time.
  • N4g4rok - Tuesday, November 15, 2011 - link

    No kidding.

    Even with its performance, it's difficult to justify that price.
  • shady28 - Monday, November 14, 2011 - link


    Could rename this article 'Core i7 3960X - Diminishing Returns'

    Not impressed at all with this new chip. Maybe if you're doing a ton of multitasking all the time (like constantly doing background decoding) it would be worth it, but even in the multitasking benchmarks it isn't exactly revolutionary.

    If multitasking is that big of a deal, you're better off getting a G34 board and popping in a pair of 8- or 12-core Magny-Cours AMDs. Or maybe the new 16-core Interlagos for G34. Heck, the 16-core is selling for $650 at Newegg already.

    For anything else, it's really only marginally faster while probably being considerably more expensive.
  • Bochi - Monday, November 14, 2011 - link

    Can we get benchmarks that show the potential impact of the greater CPU power and memory bandwidth? This may be overkill for gaming at 1920 x 1080. However, I would like to know what kind of performance changes are possible when it's used in a top-end CrossFire or SLI system.
  • rs2 - Monday, November 14, 2011 - link

    "I had to increase core voltage from 1.104V to 1.44V, but the system was stable."

    Surely that is a typo?
