The Test

To keep the review length manageable we're presenting a subset of our results here. For all benchmark results and even more comparisons be sure to use our performance comparison tool: Bench.

Motherboard: ASUS P8Z68-V Pro (Intel Z68), ASUS Crosshair V Formula (AMD 990FX), Intel DX79SI (Intel X79)
Hard Disk: Intel X25-M SSD (80GB), Crucial RealSSD C300
Memory: 4 x 4GB G.Skill Ripjaws X DDR3-1600 9-9-9-20
Video Card: ATI Radeon HD 5870 (Windows 7)
Video Drivers: AMD Catalyst 11.10 Beta (Windows 7)
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

Cache and Memory Bandwidth Performance

The biggest changes from the original Sandy Bridge are the increased L3 cache size and the quad-channel memory interface. We'll first look at the impact a 15MB L3 has on latency:

Cache/Memory Latency Comparison (in cycles)

CPU                                L1   L2   L3   Main Memory
AMD FX-8150 (3.6GHz)                4   21   65   195
AMD Phenom II X4 975 BE (3.6GHz)    3   15   59   182
AMD Phenom II X6 1100T (3.3GHz)     3   14   55   157
Intel Core i5 2500K (3.3GHz)        4   11   25   148
Intel Core i7 3960X (3.3GHz)        4   11   30   167

Cachemem shows us a 5 cycle increase in L3 latency over Sandy Bridge (30 cycles vs. 25). If this is correct, hits in the L3 can take 20% longer to get back to the core that requested the data. In small, lightly threaded applications you may see a slight regression in performance compared to Sandy Bridge, but more likely than not the ~2 - 2.5x increase in L3 cache size will more than make up for the added latency. Also note that despite the larger cache, and thanks to its ring bus, Sandy Bridge E's L3 is still lower latency than Gulftown's.
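As a back-of-envelope check, those cycle counts can be converted to wall-clock time using the clock speeds listed in the table. This is a rough sketch; actual latency depends on the active clock state at the time of the access:

```python
def cycles_to_ns(cycles, clock_ghz):
    """Convert a latency in core cycles to nanoseconds."""
    return cycles / clock_ghz

# L3 hit latency, using the figures from the table above.
print(f"Core i7 3960X: {cycles_to_ns(30, 3.3):.1f} ns")  # ~9.1 ns
print(f"Core i5 2500K: {cycles_to_ns(25, 3.3):.1f} ns")  # ~7.6 ns
print(f"FX-8150:       {cycles_to_ns(65, 3.6):.1f} ns")  # ~18.1 ns
```

In absolute time the 5-cycle penalty amounts to roughly 1.5ns per L3 hit at 3.3GHz.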

Memory Bandwidth Comparison - Sandra 2012.01.18.10

CPU                                              Aggregate Memory Bandwidth
Intel Core i7 3960X (Quad Channel, DDR3-1600)    37.0 GB/s
Intel Core i7 2600K (Dual Channel, DDR3-1600)    21.2 GB/s
Intel Core i7 990X (Triple Channel, DDR3-1333)   19.9 GB/s

Memory bandwidth is also up significantly. With all four channels populated with DDR3-1600 memory, Sandy Bridge E delivered 37GB/s in Sandra's memory bandwidth test. Against the 51.2GB/s theoretical maximum of this configuration, that works out to roughly 72% efficiency, about what we'd expect to see from a real-world bandwidth test.
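The peak-bandwidth arithmetic is easy to verify; a quick sketch using the numbers above:

```python
channels = 4
transfers_per_s = 1600e6   # DDR3-1600: 1600 MT/s per channel
bytes_per_transfer = 8     # 64-bit channel width

peak_gbs = channels * transfers_per_s * bytes_per_transfer / 1e9
measured_gbs = 37.0        # Sandra result from the table above
efficiency = measured_gbs / peak_gbs

print(f"theoretical peak: {peak_gbs:.1f} GB/s")  # 51.2 GB/s
print(f"efficiency: {efficiency:.0%}")           # 72%
```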


163 Comments


  • JlHADJOE - Tuesday, November 15, 2011 - link

    On Page 2, 'The Pros and Cons':
    > Intel's current RST (Rapid Story Technology) drivers don't support X79,

    Rapid Storage, perhaps?
  • jmelgaard - Tuesday, November 15, 2011 - link

Computers are only getting faster in one way today, and that is more cores; designing for a fixed maximum number of cores is simply stupidity in today's world.

That said, developing games that support multiple cores might be somewhat more difficult than designing highly concurrent applications that process data or requests for data. (I can't say for sure, as I have only briefly touched the game development part of the industry, but I work with the other part on a daily basis.)

But while you might save development cost right now going down that road, you will spend the savings once you suddenly have to design for 8 cores.

Carrying technical debt is never a good thing (and in my programming experience, designing with a set number of cores in mind can only add to it); it only gets more expensive to remove down the road, as has been proven true again and again.

And even if Frostbite 3 is developed from the ground up, they still have to think the concept up again; had they gone for high concurrency instead, that concept would already be in place for the next version.
  • TC2 - Tuesday, November 15, 2011 - link

    note,
    BD 4x2bc ~ 2B elements, 315mm2
    SB-E 6x2hc ~ 2.27B elements ~ +14%, 435mm2 ~ +38% (includes unused space for 2 more cores), up to 15MB cache, ...

impressive overall!
  • C300fans - Tuesday, November 15, 2011 - link

Intel Gulftown (6C): 32nm, 1.17B transistors, 240mm2
Intel Sandy Bridge E (6C): 32nm, 2.27B transistors, 435mm2

I don't see anything impressive. Any performance improvements?
  • Blaster1618 - Tuesday, November 15, 2011 - link

Given that QPI @ 3.2GHz - 205 Gb/s (25.6 GB/s) - also handled the PCIe load, can't we have something in the middle? I'm still a little confused: is DMI 2.0 still just a simple parallel interface, while QPI is a high-speed serial interface?
  • C300fans - Tuesday, November 15, 2011 - link

Just think of DMI 1.0 as four PCIe 1.0 x1 lanes, and DMI 2.0 as four PCIe 2.0 x1 lanes.
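Under that analogy the numbers work out as follows; a sketch assuming the 8b/10b encoding that PCIe 1.x/2.x use:

```python
def link_gbs(lanes, gt_per_s):
    """Per-direction bandwidth in GB/s of a PCIe 1.x/2.x-style link.
    8b/10b encoding leaves 8 data bits per 10 transferred bits."""
    return lanes * gt_per_s * (8 / 10) / 8

print(f"{link_gbs(4, 2.5):.1f} GB/s")  # DMI 1.0: four PCIe 1.0 x1 lanes -> 1.0 GB/s
print(f"{link_gbs(4, 5.0):.1f} GB/s")  # DMI 2.0: four PCIe 2.0 x1 lanes -> 2.0 GB/s
```

Both figures are per direction, well short of QPI's 25.6 GB/s, which is why DMI only carries chipset traffic.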
  • jmelgaard - Tuesday, November 15, 2011 - link

Clearly you didn't read a single one of my points, or simply lack the understanding.

Applications are not developed to target specific cores; your OS handles all that. It is a simple matter of pushing out jobs in threads or processes.

Processing in 10, 100 or 1000 threads/processes is no more difficult than doing it in 4... it just requires that you have enough "JOBS" to process (and that term was deliberately chosen)...

This requires a different mindset though, and it might be harder to think of games that way right now, mostly because developers are used to running everything in that single game loop, but doing it now could be a rather good ROI down the road.
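The job-based model described above can be sketched in a few lines of Python; `simulate_step` is a hypothetical stand-in for one unit of work (say, one entity's AI tick), and the worker count follows the machine instead of a number fixed at design time:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def simulate_step(job):
    # Hypothetical stand-in for one unit of game/work logic.
    return job * job

jobs = range(1000)

# The pool size follows the machine, not a number baked in at design
# time: the same code uses 4, 8, or 16 cores unchanged.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(simulate_step, jobs))

print(results[:5])  # [0, 1, 4, 9, 16]
```

The point is that nothing in the code names a core count; scaling comes from having enough independent jobs.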
  • DarkUltra - Tuesday, November 15, 2011 - link

How about overclocking with turbo boost enabled? I mean, if the 3960X is stable at 4.4GHz, can it be stable at 4.8GHz when games or applications only use four cores? Then it would overclock and perform as well as a 2600K under four heavy threads.
  • yankeeDDL - Tuesday, November 15, 2011 - link

Guys, there are always people with more money than brains who will purchase just about anything.
That's not the point. Having the fastest CPU makes it a status symbol, and whoever makes it has the luxury of pricing it in the $1000 range, for fools to buy.
I don't know about CPUs, but I do know that the top performing GPUs (HD6990 and GTX590) are sold in extremely low volumes, partly because of the relatively low ROI, and partly because the market is so small that inventories are scarce to begin with.
So, you may be right on the CPU side, but in general, you're both wrong.

That said, my point was that if AMD had performed and delivered a good CPU instead of the FX8150, or the FX8150 at a good price point ($170, not $279), then Intel would have had a tougher time pushing out the 3960X at this price, and it would have had to work harder on the chipset. Because of its huge lead over AMD, however, Intel can now comfortably rebrand a "mid range" chipset and shove it at customers, who have no choice but to take it if they want the best CPU.
  • retoureddy - Wednesday, November 16, 2011 - link

I agree that only two 6Gbps SATA ports is a disappointment. Interesting, though, is running two SSDs in RAID 0 on the Intel controller. With two Kingston SSDs I manage really good figures (CrystalDiskMark, 4000MB test): 1040MB/s read and 621MB/s write (SEQ) / 675 and 481 (512K) / 28 and 253 (4K) / 279 and 405 (4K QD32). I never managed this kind of throughput on the Z68 or P67 on-board controllers. These numbers are getting close to hardware RAID controllers like ARECA and LSI. I would have been interested to see where the bottleneck lies if X79 had had more ports. Even though X58 is 3Gbps SATA, you had no problem bottlenecking the Intel RAID controller at around 800MB/s.
