Gaming Performance

Chances are that any gamer looking at an IVB-E system is also considering a pretty ridiculous GPU setup. NVIDIA sent along a pair of GeForce GTX Titan GPUs, totaling over 14 billion GPU transistors, to pair with the 4960X to help evaluate its gaming performance. I ran the pair through a bunch of games, all at 1080p and at relatively high settings. In some cases you'll see very obvious GPU limitations, while in other situations we'll see some separation between the CPUs.

I haven't yet integrated this data into Bench, so you'll see a different selection of CPUs here than we've used elsewhere. All of the primary candidates are well represented here. There's Ivy Bridge E and Sandy Bridge E of course, in addition to mainstream IVB/SNB. I threw in Gulftown- and Nehalem-based parts, as well as AMD's latest Vishera SKUs and an old 6-core Phenom II X6.

Bioshock Infinite

Bioshock Infinite is Irrational Games’ latest entry in the Bioshock franchise. Though it’s based on Unreal Engine 3 – making it our obligatory UE3 game – Irrational has added a number of effects that make the game rather GPU-intensive on its highest settings. As an added bonus it includes a built-in benchmark composed of several scenes, a rarity for UE3 engine games, so we can easily get a good representation of what Bioshock’s performance is like.

We're running the benchmark mode at its highest quality defaults (Ultra DX11) with DDOF enabled.

Bioshock Infinite, GeForce Titan SLI

We're going to see a lot of this, I suspect. Whenever we see CPU dependency in games, it tends to manifest as a strong dependence on single threaded performance. Here Haswell's architectural advantages are apparent as the two quad-core Haswell parts pull ahead of the 4960X by about 8%. The 4960X does reasonably well, but you don't really want to spend $1000 on a CPU just for it to come in 3rd. With only two GPUs, the PCIe lane advantage isn't good for much.

Metro: Last Light

Metro: Last Light is the latest entry in the Metro series of post-apocalyptic shooters by developer 4A Games. Like its predecessor, Last Light is a game that sets a high bar for visual quality, and at its highest settings an equally high bar for system requirements thanks to its advanced lighting system. We run Metro: LL at its highest quality settings, with tessellation set to very high and 16X AF/SSAA enabled.

Metro:LL, GeForce Titan SLI

The tune shifts a bit with Metro: LL. Here the 4960X actually pulls ahead by a very small amount. In fact, both of the LGA-2011 6-core parts manage very small leads over Haswell here. The differences are small enough to basically be within the margin of error for this benchmark though.
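When calling a result "within the margin of error," the question is whether the gap between two CPUs is larger than the run-to-run noise of the benchmark itself. A minimal sketch of that check, using hypothetical FPS numbers from repeated runs (a simple 2-sigma rule, not our actual methodology):

```python
import statistics

def within_margin(runs_a, runs_b, sigmas=2.0):
    """Treat two result sets as tied if their means differ by less
    than the combined run-to-run noise of the two sets."""
    mean_a, mean_b = statistics.mean(runs_a), statistics.mean(runs_b)
    noise = statistics.stdev(runs_a) + statistics.stdev(runs_b)
    return abs(mean_a - mean_b) < sigmas * noise

# Hypothetical average FPS from three runs on each CPU:
print(within_margin([98.1, 97.6, 98.4], [98.9, 98.2, 99.0]))  # → True
```

With differences this small relative to the per-run scatter, declaring a winner would be overinterpreting the data.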

Sleeping Dogs

A Square Enix game, Sleeping Dogs is one of the few open world games to ship with any kind of built-in benchmark, giving us a rare opportunity to test the genre. Like most console ports, Sleeping Dogs’ base assets are not extremely demanding, but it makes up for it with its interesting anti-aliasing implementation, a mix of FXAA and SSAA that at its highest settings does an impeccable job of removing jaggies. However, by effectively rendering the game world multiple times over, it can also require a very powerful video card to drive these high AA modes.

Our test here is run at the game's Extreme Quality defaults.

Sleeping Dogs, GeForce Titan SLI

Sleeping Dogs shows similar behavior of the 4960X making its way to the very top, with Haswell hot on its heels.

Tomb Raider (2013)

The simply titled Tomb Raider is the latest entry in the Tomb Raider franchise, making a clean break from past titles in plot, gameplay, and technology. Tomb Raider games have traditionally been technical marvels and the 2013 iteration is no different. Like all of the other titles here, we ran Tomb Raider at its highest quality (Ultimate) settings. Motion Blur and Screen Effects options were both checked.

Tomb Raider (2013), GeForce Titan SLI

With the exception of the Celeron G540, all of the parts here perform roughly the same. The G540 doesn't do well in any of our tests; I confirmed SLI was operational in all cases, but its performance was abysmal regardless.

Total War: Shogun 2

Our next benchmark is Shogun 2, a continuing favorite in our benchmark suite. Total War: Shogun 2 is the latest installment of the long-running Total War series of turn-based strategy games, and alongside Civilization V is notable for just how many units it can put on screen at once. Even two years after its release it’s still a very punishing game at its highest settings due to the amount of shading and memory those units require.

We ran Shogun 2 in its DX11 High Quality benchmark mode.

Total War: Shogun 2, GeForce Titan SLI

We see roughly equal performance between IVB-E and Haswell here.


GRID 2

GRID 2 is a new addition to our suite and our new racing game of choice, being the very latest racing game out of genre specialty developer Codemasters. Intel did a lot of publicized work with the developer on this title, creating a high-performance implementation of Order Independent Transparency for Haswell, so I expect it to be well optimized for Intel architectures.
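Order Independent Transparency matters because conventional alpha blending gives different results depending on the order in which transparent surfaces are composited, which normally forces expensive depth sorting. A toy sketch of the underlying problem (not Intel's implementation, just the standard "over" operator):

```python
# Conventional alpha blending is order-dependent: compositing the same
# two translucent layers in opposite orders yields different colors.
def over(src_rgb, src_a, dst_rgb):
    """Composite one straight-alpha layer over an opaque background."""
    return tuple(s * src_a + d * (1.0 - src_a) for s, d in zip(src_rgb, dst_rgb))

white = (1.0, 1.0, 1.0)
red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)

# Blue drawn first, then red over it:
a = over(red, 0.5, over(blue, 0.5, white))
# Red drawn first, then blue over it:
b = over(blue, 0.5, over(red, 0.5, white))
print(a, b)  # the two orders disagree, hence the need for OIT
```

OIT techniques resolve this per-pixel at shading time instead of requiring the engine to sort geometry back-to-front.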

We ran GRID 2 at Ultra quality defaults.

GRID 2, GeForce Titan SLI

We started with a scenario where Haswell beat out IVB-E, and we're ending with the exact opposite. Here the 4960X's roughly 10% advantage is likely due to the much larger L3 cache present on both IVB-E and SNB-E. Overall you'll get great gaming performance out of the 4960X, but even with two Titans at its disposal you won't see substantially better frame rates than a 4770K in most cases.
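One way to summarize a mixed set of results like this is a geometric mean of per-game frame rates, which keeps one outlier title from dominating the average. A sketch with hypothetical FPS figures (not our measured numbers) for the two CPUs across five titles:

```python
from math import prod

def geomean(xs):
    """Geometric mean: the right average for ratios like relative FPS."""
    return prod(xs) ** (1.0 / len(xs))

# Hypothetical average FPS across five titles for each CPU:
fps_4960x = [102.0, 88.5, 121.3, 97.8, 76.4]
fps_4770k = [106.1, 87.9, 119.8, 97.5, 69.5]

advantage = geomean(fps_4960x) / geomean(fps_4770k) - 1.0
print(f"4960X vs 4770K: {advantage:+.1%}")
```

With these made-up numbers the overall gap works out to only a couple of percent, which mirrors the wash we see across the real benchmarks.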



Comments

  • cactusdog - Tuesday, September 03, 2013 - link

    IVB-E is a massive failure just like SB-E. Thanks Intel for killing the high end for me. Actually, I think this is their plan, to kill the high end. It's ridiculous that this platform is so far behind the mainstream platform. 2x SATA 6Gbps ports? No Intel USB 3.0? Worse single threaded performance than mainstream? Sandy Bridge-E seemed like an unfinished project where many compromises were made and IVB-E looks the same.
  • knweiss - Tuesday, September 03, 2013 - link

    Anand, you write that Corsair supplied 4x 8GB DDR3-1866 Vengeance Pro memory for the testbed. However, you also remark on "infrequent instability at stock voltages" with 32 GB. Then, in the legend of the memory latency chart, you write "Core i7-4960X (DDR3-1600)".

    So I wonder which memory configuration was actually used during the benchmarks? Less than 32 GB with DDR3-1866, non-stock voltages, or 32GB DDR3-1600? Wouldn't anything but 4x DDR3-1866 be a little bit unfair because you otherwise don't utilise the full potential of the CPU?
  • bobbozzo - Tuesday, September 03, 2013 - link

    The article says that 1600 is the max memory speed SUPPORTED if you use more than one DIMM per channel.
  • knweiss - Wednesday, September 04, 2013 - link

    There are 4 channels.
  • chizow - Tuesday, September 03, 2013 - link

    Nice job Anand, your conclusion pretty much nailed it as to why LGA2011 doesn't cut it today and why this release is pretty ho-hum in general. I would've liked to have seen some 4820K results in there to better illustrate the difference between 4770K Haswell and SB-E, but I suppose that is limited by what review samples you received.

    But yeah, unless you need the 2 extra cores or need double DIMM capacity, there's not much reason to go LGA2011/IVB-E over Haswell at this point. Even the PCIe lane benefit is hit or miss for Nvidia users, as PCIe 3.0 is not officially supported for Nvidia cards and their reg hack is hit or miss on some boards still.

    The downsides of LGA2011 vs LGA1150 are much greater, imo, as you lose 4 extra SATA3(6G) ports and native USB 3.0 as you covered, along with much lower overall platform power consumption. The SATA3 situation is probably the worst though, as 2 isn't really enough to do much, but 6 opens up the possibility of an SSD boot drive along with a few really fast SATA3 RAID0 arrays.
  • TEAMSWITCHER - Tuesday, September 03, 2013 - link

    I'm really disappointed by these numbers. As a software developer, the Firefox compile benchmark best indicates the benefit I would get from upgrading to this CPU. And it looks like the 4770K would be about the same difference - except far less expensive. I really don't think I need anything more than 16GB of RAM, and one high-end graphics card is enough to drive my single WQHD (2560x1440) display. Do bragging rights count? No...I mean really?
  • madmilk - Tuesday, September 03, 2013 - link

    Time to consider Xeons...
  • MrBungle123 - Tuesday, September 03, 2013 - link

    What's a Xeon going to do? Be slower than the 4960X? You lose clock speed by going with huge core counts and that translates to even more losses in single threaded performance. There comes a point where there are diminishing returns on adding more cores... (see AMD)
  • MrSpadge - Tuesday, September 03, 2013 - link

    Time to save yourself some money with a 4770 - not the worst news.
  • Kevin G - Wednesday, September 04, 2013 - link

    Your best bet would be to hope for a desktop Crystalwell part. That extra cache should do wonders for compile times even if you'd lose a bit of clock speed. However, Intel is intentionally holding back the best socket 1150 parts they could offer, as the benefits of Crystalwell + TSX-optimized software would put performance into large core count Xeon territory in some cases.
