Gaming Performance

Chances are that any gamer looking at an IVB-E system is also considering a pretty ridiculous GPU setup. NVIDIA sent along a pair of GeForce GTX Titan cards, totaling over 14 billion GPU transistors, to run alongside the 4960X and help evaluate its gaming performance. I ran the pair through a bunch of games, all at 1080p and at relatively high settings. In some cases you'll see very obvious GPU limitations, while in other situations we'll see some separation between the CPUs.

I haven't yet integrated this data into Bench, so you'll see a different selection of CPUs here than we've used elsewhere. All of the primary candidates are well represented. There's Ivy Bridge E and Sandy Bridge E of course, in addition to mainstream IVB/SNB. I threw in Gulftown and Nehalem based parts, as well as AMD's latest Vishera SKUs and an old 6-core Phenom II X6.

Bioshock Infinite

Bioshock Infinite is Irrational Games’ latest entry in the Bioshock franchise. Though it’s based on Unreal Engine 3 – making it our obligatory UE3 game – Irrational has added a number of effects that make the game rather GPU-intensive on its highest settings. As an added bonus it includes a built-in benchmark composed of several scenes, a rarity for UE3 engine games, so we can easily get a good representation of what Bioshock’s performance is like.

We're running the benchmark mode at its highest quality defaults (Ultra DX11) with DDOF enabled.

Bioshock Infinite, GeForce Titan SLI

I suspect we're going to see a lot of this. Whenever games show a CPU dependency, it tends to be on single threaded performance. Here Haswell's architectural advantages are apparent as the two quad-core Haswell parts pull ahead of the 4960X by about 8%. The 4960X does reasonably well, but third place isn't really what you want from a $1000 CPU. With two GPUs, the PCIe lane advantage isn't good for much.
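
To illustrate the point, here's a minimal toy model (all per-frame costs below are made-up illustrative numbers, not measurements from this review) of why a single-threaded CPU difference only becomes visible once the GPU side gets fast enough: the frame time is simply set by whichever of the game's main thread or the GPU finishes last.

#include <stdio.h>

/* The slower of the two stages sets the frame time; the faster one hides
 * behind it. All numbers are hypothetical, purely for illustration. */
static double frame_rate(double cpu_ms, double gpu_ms)
{
    double frame_ms = cpu_ms > gpu_ms ? cpu_ms : gpu_ms;
    return 1000.0 / frame_ms;
}

int main(void)
{
    double cpu_ms_haswell = 6.0;  /* hypothetical main-thread time, higher IPC */
    double cpu_ms_ivbe    = 6.5;  /* hypothetical main-thread time, 4960X      */
    double gpu_ms_single  = 12.0; /* one Titan                                 */
    double gpu_ms_sli     = 6.2;  /* two Titans, assuming near-ideal scaling   */

    printf("Single Titan: Haswell %.0f fps, IVB-E %.0f fps\n",
           frame_rate(cpu_ms_haswell, gpu_ms_single),
           frame_rate(cpu_ms_ivbe, gpu_ms_single));
    printf("Titan SLI:    Haswell %.0f fps, IVB-E %.0f fps\n",
           frame_rate(cpu_ms_haswell, gpu_ms_sli),
           frame_rate(cpu_ms_ivbe, gpu_ms_sli));
    return 0;
}

With one Titan both CPUs sit behind the same GPU limit; with two, the main thread is the only thing left to differentiate them, which is roughly the pattern in the chart above.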

Metro: Last Light

Metro: Last Light is the latest entry in the Metro series of post-apocalyptic shooters by developer 4A Games. Like its predecessor, Last Light is a game that sets a high bar for visual quality, and at its highest settings an equally high bar for system requirements thanks to its advanced lighting system. We run Metro: LL at its highest quality settings, with tessellation set to very high and 16X AF/SSAA enabled.

Metro:LL, GeForce Titan SLI

The tune shifts a bit with Metro: LL. Here the 4960X actually pulls ahead by a very small amount. In fact, both of the LGA-2011 6-core parts manage very small leads over Haswell here. The differences are small enough to basically be within the margin of error for this benchmark though.

Sleeping Dogs

A Square Enix game, Sleeping Dogs is one of the few open world games to be released with any kind of benchmark, giving us a unique opportunity to benchmark an open world game. Like most console ports, Sleeping Dogs’ base assets are not extremely demanding, but it makes up for it with its interesting anti-aliasing implementation, a mix of FXAA and SSAA that at its highest settings does an impeccable job of removing jaggies. However by effectively rendering the game world multiple times over, it can also require a very powerful video card to drive these high AA modes.
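
Some quick back-of-the-envelope arithmetic (my own illustration, not the game's actual render path) shows why: the number of shaded pixels per frame scales directly with the supersampling factor, so the highest AA modes effectively ask the GPUs to render 1080p several times over.

#include <stdio.h>

int main(void)
{
    /* Hypothetical 1080p target; SSAA shades this many times more pixels. */
    const double base_pixels = 1920.0 * 1080.0;
    const double target_fps  = 60.0;
    const double ssaa[]      = { 1.0, 2.0, 4.0 };

    for (int i = 0; i < 3; i++) {
        double per_frame = base_pixels * ssaa[i];
        printf("%.0fx SSAA: %4.1f Mpix/frame, %4.2f Gpix/s at %.0f fps\n",
               ssaa[i], per_frame / 1e6, per_frame * target_fps / 1e9, target_fps);
    }
    return 0;
}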

Our test here is run at the game's Extreme Quality defaults.

Sleeping Dogs, GeForce Titan SLI

Sleeping Dogs shows similar behavior, with the 4960X making its way to the very top and Haswell hot on its heels.

Tomb Raider (2013)

The simply titled Tomb Raider is the latest entry in the Tomb Raider franchise, making a clean break from past titles in plot, gameplay, and technology. Tomb Raider games have traditionally been technical marvels and the 2013 iteration is no different. As with all of the other titles here, we ran Tomb Raider at its highest quality (Ultimate) settings. The Motion Blur and Screen Effects options were both checked.

Tomb Raider (2013), GeForce Titan SLI

With the exception of the Celeron G540, all of the parts here perform roughly the same. The G540 doesn't do well in any of our tests; I confirmed SLI was operational in all cases, but its performance was simply abysmal regardless.

Total War: Shogun 2

Our next benchmark is Shogun 2, a continuing favorite in our benchmark suite. Total War: Shogun 2 is the latest installment of the long-running Total War series of turn-based strategy games, and alongside Civilization V is notable for just how many units it can put on screen at once. Even two years after its release it’s still a very punishing game at its highest settings due to the amount of shading and memory those units require.

We ran Shogun 2 in its DX11 High Quality benchmark mode.

Total War: Shogun 2, GeForce Titan SLI

We see roughly equal performance between IVB-E and Haswell here.

GRID 2

GRID 2 is a new addition to our suite and our racing game of choice, being the latest racing game out of genre specialty developer Codemasters. Intel did a lot of publicized work with the developer on this title, creating a high performance implementation of Order Independent Transparency for Haswell, so I expect it to be well optimized for Intel architectures.
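
For context, here is a CPU-side sketch of the general idea behind order independent transparency, not Intel's specific Haswell implementation: transparent fragments can be captured in any order, and a resolve pass sorts them per pixel and blends back-to-front, so the engine never has to pre-sort its transparent geometry.

#include <stdio.h>
#include <stdlib.h>

typedef struct { float depth, r, g, b, a; } Fragment;

static int by_depth_desc(const void *pa, const void *pb)
{
    float da = ((const Fragment *)pa)->depth;
    float db = ((const Fragment *)pb)->depth;
    return (da < db) - (da > db);            /* farthest fragment sorts first */
}

/* Resolve one pixel: sort the captured fragments far-to-near, then apply
 * standard "over" blending onto the opaque background color. */
static void resolve_pixel(Fragment *frags, int n, const float bg[3], float out[3])
{
    qsort(frags, n, sizeof(Fragment), by_depth_desc);
    out[0] = bg[0]; out[1] = bg[1]; out[2] = bg[2];
    for (int i = 0; i < n; i++) {
        out[0] = frags[i].a * frags[i].r + (1.0f - frags[i].a) * out[0];
        out[1] = frags[i].a * frags[i].g + (1.0f - frags[i].a) * out[1];
        out[2] = frags[i].a * frags[i].b + (1.0f - frags[i].a) * out[2];
    }
}

int main(void)
{
    /* Two overlapping transparent fragments, captured out of depth order. */
    Fragment frags[] = {
        { 0.3f, 1.0f, 0.0f, 0.0f, 0.5f },    /* near, red glass */
        { 0.8f, 0.0f, 0.0f, 1.0f, 0.5f },    /* far, blue glass */
    };
    float bg[3] = { 1.0f, 1.0f, 1.0f }, out[3];
    resolve_pixel(frags, 2, bg, out);
    printf("resolved pixel: %.2f %.2f %.2f\n", out[0], out[1], out[2]);
    return 0;
}

Intel's Haswell version, as I understand it, does this per-pixel bookkeeping on the GPU with hardware-assisted pixel synchronization rather than on the CPU, which is where the publicized optimization work went.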

We ran GRID 2 at Ultra quality defaults.

GRID 2, GeForce Titan SLI

We started with a scenario where Haswell beat out IVB-E, and we're ending with the exact opposite. Here the 4960X's roughly 10% advantage is likely due to the much larger L3 cache present on both IVB-E and SNB-E. Overall you'll get great gaming performance out of the 4960X, but even with two Titans at its disposal you won't see substantially better frame rates than a 4770K in most cases.
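
To make the cache argument concrete, a generic pointer-chase micro-benchmark along the following lines (a sketch of the standard technique, not something run for this review) will show average access latency jumping once a hot working set outgrows the 4770K's 8MB L3 while still fitting in the 4960X's 15MB.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERS (1 << 24)

/* Time ITERS dependent loads over a random cyclic permutation of the working
 * set, so hardware prefetchers can't hide the miss latency. */
static double ns_per_access(size_t bytes)
{
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {     /* Sattolo's algorithm: one big cycle */
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < ITERS; i++) p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    volatile size_t sink = p; (void)sink;    /* keep the chase from being optimized out */
    free(next);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / ITERS;
}

int main(void)
{
    /* Sweep working sets around the 8MB (4770K) and 15MB (4960X) L3 sizes. */
    size_t sizes_mb[] = { 4, 8, 12, 16, 32 };
    for (size_t i = 0; i < sizeof(sizes_mb) / sizeof(sizes_mb[0]); i++)
        printf("%2zu MB working set: %.1f ns/access\n",
               sizes_mb[i], ns_per_access(sizes_mb[i] << 20));
    return 0;
}

Whether a particular game's hot data actually lands in that 8-15MB window is the open question; when it does the extra L3 is worth a few percent, and when it doesn't the two chips look identical, which fits the spread across these six games.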

Comments

  • ShieTar - Tuesday, September 3, 2013

    What's the point? A 10-core only runs at 2GHz, and an 8-core only runs at 3 GHz, so both have less overall performance than a 6-core overclocked to more than 4GHz. You simply cannot put more computing power into a reasonable power envelope for a single socket. If a water-cooled Enthusiast 6-core is not enough for your needs, you automatically need a 2-socket system.

    And it's not like that is not feasible for enthusiasts. The ASUS Z9PE-D8 WS, the EVGA Classified SR-X and the Supermicro X9DAE are mainboards aiming at the enthusiast / workstation market, combining two sockets for XEON-26xx with the capability to run GPUs in SLI/CrossFire. And if you are looking to spend significantly more than 1k$ for a CPU, the 400$ on those boards and the extra cost for ECC Memory should not scare you either.

    Just go and check Anandtech's own benchmarking: http://www.anandtech.com/show/6808/westmereep-to-s... . It's clear that you need two 8-cores to be faster than the enthusiast 6-cores even before overclocking is taken into account.

    Maybe with Haswell-E we can get 8 cores with >3.5GHz into <130W, but with Ivy Bridge, there is simply no point.
  • f0d - Tuesday, September 3, 2013

    who cares if the power envelope is "reasonable"?
    i already have my SBE overclocked to 5.125Ghz and if they release a 10core i would oc that thing like a mutha******

    that link you posted is EXACTLY why i want a 10/12 core instead of dual socket (which i could afford if it made sense performance wise) - its obvious that video encoding doesnt work well with NUMA and dual sockets but it does work well with multi cored single cpu's

    so i say give me a 10 core and let me OC it like crazy - i dont care if it ends up using 350W+ i have some pretty insane watercooling to suck it up (3k ultra kaze's in push/pull on a rx480rad 24v laingd5s raystorm wb - a little over the top but isnt that what these extreme cpu's are for?)
  • 1Angelreloaded - Tuesday, September 3, 2013

    I have to agree with you; in the extreme market who gives a damn about being green? Most will run 1200 watt Plat mod PSUs with an added extra 450 watt in the background, and 4 GPUs, as this is pretty much the only reason to buy into the 2011 socket in the first place: 2 extra cores and 40x PCIe lanes.
  • crouton - Tuesday, September 3, 2013

    I could not agree with you more! I have an OC'd i920 that just keeps chugging along and if I'm going to drop some coin on an upgrade, I want it to be an UPGRADE. Let ME decide what's reasonable for power consumption. If I burn up an 8/10 core CPU with some crazy cooling solution then it's MY fault. I accept this. This is the hobby that I've chosen and it comes with risks. This is not some elementary school "color by numbers" hobby where you can follow a simple set of instructions to get the desired result in 10 minutes. This is for the big boys. It takes weeks or more to get it right and even then, we know we can do better. Not interested in XEON either.
  • Assimilator87 - Tuesday, September 3, 2013

    The 12 core models run at 2.7Ghz, which will be slightly faster than six cores at 5.125Ghz. You could also bump up the bclk to 105, which would put the CPU at 2.835Ghz.
  • Casper42 - Tuesday, September 3, 2013

    2690 v2 will be 10c @ 3.0 and 130W. Effectively 30Ghz.
    2697 v2 will be 12c @ 2.7 and 130W. Effectively 32.4Ghz

    Assuming a 6 Core OC'd to 5Ghz Stable, 6c @ 5.0 and 150W? (More Power due to OC)
    effectively 30Ghz.

    So tell me again how a highly OC'd and largely unavailable to the masses 6c is better than a 10/12c when you need Multiple Threads?
    Keep in mind those 10 and 12 core Server CPUs are almost entirely AIR cooled and not overclocked.

    I think they should have released an 8 and 10 core Enthusiast CPU. Hike up the price and let the market decide which one they want.
  • MrSpadge - Tuesday, September 3, 2013

    6c @ 5.0 will eat more like 200+ W instead of 130/150.
  • ShieTar - Wednesday, September 4, 2013

    For Sandy Bridge, we had:
    2687, 8c @ 3.1 GHz => 24.8 GHz effectively
    3970X, 6c @ 3.5 GHz => 21 GHz before overclocking, only 4.2 GHz required to exceed the Xeon.

    Fair enough, for Ivy Bridge Xeons, the 10-core at 3 GHz has been announced. I'll believe that claim when I see some actual benchmarks on it. I have some serious doubts that a 10-core at 3 GHz can actually use less power than an 8-core at 3.4 GHz. So let's see what frequency those parts actually run at, under load.

    Furthermore, the effective GHz are not the whole truth, even on highly parallel tasks. While cache seems to scale with the number of cores for most Xeons, memory bandwidth does not, and there are always overheads due to the common use of the L3 cache and the memory.

    Finally, not directly towards you but to several people talking about "green": Entirely not the point. No matter how much power your cooling system can remove, you are always creating thermal gradients when generating too much heat on a very small space. Why do you guys think there was no 3.5GHz 8 core for Sandy Bridge-EP? The silicon is the same for 6-core and 8-core, the core itself could run the speed. But INTEL is not going to verify the continued operation of a chip with a TDP >150W.

    They give a little leeway when it comes to the K-class, because there the risk is with the customer to a certain point. But they just won't go and sell a CPU which reliably destroys itself or the MB the very moment somebody tries to overclock it.
  • psyq321 - Thursday, September 5, 2013

    I am getting 34.86 @Cinebench with dual Xeon 2697 v2 running @3 GHz (max all-core turbo).

    Good luck reaching that with superclocked 4930/4960X ;-)
  • piroroadkill - Tuesday, September 3, 2013

    All I really learn from these high end CPU results is that if you actually invested in high end 1366 in the form of 980x all that time ago, you've got probably the longest lasting system in terms of good performance that I can even think of.
