Sleeping Dogs

Sleeping Dogs is a benchmarking wet dream – a highly complex benchmark that can bring the toughest setups at high resolutions down into single-digit frame rates. Its extreme SSAO setting is largely responsible for that, but at the right settings Sleeping Dogs is highly playable and enjoyable. We run the basic benchmark program laid out in the Adrenaline benchmark tool at the Xtreme (1920x1080, Maximum) performance setting, noting down the average and minimum frame rates.

[Graph: Sleeping Dogs, 1080p Max, 1x GTX 770. Average and minimum frame rates, NVIDIA and AMD GPUs.]

The lower frequency of the 12-core Xeon sometimes puts it behind in our Sleeping Dogs testing, usually in multi-GPU results such as 3x HD 7970, where it falls 15 FPS behind both the i7-4960X and the E5-2687W v2.

Company of Heroes 2

The final gaming benchmark is another humdinger. Company of Heroes 2 can also bring a top-end GPU to its knees, even at very basic benchmark settings. Getting an average of 30 FPS out of a normal GPU is a challenge, let alone a minimum frame rate of 30 FPS. For this benchmark I use modified versions of Ryan's batch files at 1920x1080 on Medium settings. COH2 is a little odd in that it does not scale with more GPUs.

[Graph: Company of Heroes 2, 1080p Max, 1x GTX 770. Average and minimum frame rates, NVIDIA and AMD GPUs.]

COH2 also acts somewhat CPU agnostic, although the higher frequency Xeon does have a small but negligible boost over the E5-2697 v2. In all circumstances, the i7-4960X is competitive.

Battlefield 4

The EA/DICE series that has taken countless hours of my life away is back for another iteration, using the Frostbite 3 engine. AMD is also piling its resources into BF4 with the new Mantle API for developers, designed to cut the time required for the CPU to dispatch commands to the graphics subsystem. For our test we use the in-game benchmarking tools and record the frame times for the first ~70 seconds of the Tashgar single player mission, an on-rails sequence involving the generation and rendering of objects and textures. We test at 1920x1080 on Ultra settings.
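For readers curious how per-frame timings turn into the numbers we report, here is a minimal sketch (the exact reduction our tools perform is not shown here, so treat the method and function names as illustrative): average FPS is total frames over total elapsed time, while the 99th percentile figure is derived from the 99th percentile frame time, i.e. the speed at which 99% of frames render at least that fast.

```python
# Illustrative post-processing of a per-frame render time log (milliseconds).
# These helper functions are hypothetical, not part of any benchmark tool.

def average_fps(frame_times_ms):
    """Average FPS: total frames divided by total elapsed time."""
    total_seconds = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_seconds

def percentile_fps(frame_times_ms, pct=99):
    """FPS at the pct-th percentile frame time (nearest-rank method):
    pct% of frames rendered at least this fast."""
    ordered = sorted(frame_times_ms)  # slowest frames end up at the back
    rank = max(int(round(pct / 100.0 * len(ordered))) - 1, 0)
    return 1000.0 / ordered[rank]

# Example: 97 smooth ~60 FPS frames (16.7 ms) plus three 40 ms spikes
times = [16.7] * 97 + [40.0] * 3
print(round(average_fps(times), 1))    # → 57.5
print(round(percentile_fps(times), 1)) # → 25.0
```

With only three 40 ms spikes in 100 frames, the average still reads a healthy ~57 FPS while the 99th percentile drops to 25 FPS, which is exactly why percentile figures expose stutter that averages hide.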

[Graph: Battlefield 4, 1080p Max, 1x GTX 770. Average and 99th percentile frame rates, NVIDIA and AMD GPUs.]

As we add more GPUs, AMD and NVIDIA act differently: with NVIDIA, more MHz gets better frame rates, whereas with AMD, more cores win out.

Conclusions

It would seem that in our gaming benchmarks, the higher frequency E5-2687W v2 is the more obvious choice over the 12-core E5-2697 v2. However, in almost all circumstances they perform on par with or below the i7-4960X, suggesting that the games we tested cannot take advantage of the extra threads.

Comments

  • XZerg - Monday, March 17, 2014 - link

    this bench also shows that the haswell had almost no CPU related performance benefits over IVB (if not slowed down performance) looking at 3770k vs 4770k and that haswell ups the gpu performance only.

    i really question intel's skuing of haswell...
  • Nintendo Maniac 64 - Monday, March 17, 2014 - link

    Emulation?
  • BMNify - Monday, March 17, 2014 - link

    its a shame they didn't do a UHD x264 encode here as that would have shown a haswell AVX2 improvement (something like 90% over AVX), and why people will have to wait for the xeons to catch up to at least AVX2 if not AVX3.1
  • psyq321 - Wednesday, March 19, 2014 - link

    There is no "90% speedup over AVX" between HSW and IVB architectures.

    AVX (v1) is floating point only and thus was useless for x264. For floating point workloads you would be very lucky to get 10% improvement by jumping to AVX2. The only difference between AVX and AVX2 for floating point is the FMA instruction and gather, but gather is done in microcode for Haswell, so it is not actually much faster than manually gathering data.

    Now, x264 AVX2 is a big improvement because it is an integer workload, and with AVX (v1) you could not do that. So x264 is jumping from SSE4.x to AVX2, which is a huge jump and it allows much more efficient processing.

    For integer workloads that can be optimized to load and process eight 32-bit values at once, AVX2 Xeon EPs/EXs will be a big thing. Unfortunately, this is not so easy to do for general-purpose algorithms. The x264 team did a great job, but I doubt you will be using a 14-core single Haswell EP (or 28-core dual CPU) for H.264 transcoding. This job can probably be done much more efficiently with dedicated accelerators.

    As for the scientific applications, they already benefit from AVX v1 for floating point workloads. AVX2 in Haswell is just a stop-gap as the gather is microcoded, but getting code ready for hardware gather in the future uArch is definitely a good way to go.

    Finally, when Skylake arrives with AVX 3.1, this will be the next big jump after AVX (v1) for scientific / floating point use cases.
  • Kevin G - Monday, March 17, 2014 - link

    Shouldn't the Xeon E5-2687W v2 support 384 GB of memory? 4 channels * 3 slots per channel * 32 GB DIMM per slot? (Presumably it could be twice that using eight-rank 64 GB DIMMs, but I'm not sure if Intel has validated them on the 6 and 10 core dies.) Registered memory has to be used for the E5-2687W v2 to get to 256 GB, so is the chip just not capable of running a third slot per channel? Seems like a weird handicap. I can only imagine this being more of a design guideline than anything explicit. The 150W CPUs are workstation focused, and workstations tend to only have 8 slots maximum.

    Also a bit weird is the inclusion of the E5-2400 series in the first page's table. While they use the same die, they use a different socket (LGA 1356) with triple-channel memory support and only 24 PCIe lanes. With the smaller physical area and generally lower TDPs, they're aimed squarely at the blade server market. Socket LGA 2011 is far more popular in workstations and 1U-and-up servers.
  • jchernia - Monday, March 17, 2014 - link

    A 12 core chip is a server chip - the workstation/PC benchmarks are interesting, but the really interesting benchmarks would be on the server side.
  • Ian Cutress - Monday, March 17, 2014 - link

    Johan covered the server side in his article - I link to it many times in the review:
    http://www.anandtech.com/show/7285/intel-xeon-e5-2...
  • BMNify - Monday, March 17, 2014 - link

    a mass of others might argue a 12 core/24 thread chip or better is a potential "real-time" UHD x264 encoding machine, it's just out of most encoders' budgets, so NO SALE....
  • Nintendo Maniac 64 - Monday, March 17, 2014 - link

    Uh, where's the test set up for the 7850K?
  • Nintendo Maniac 64 - Monday, March 17, 2014 - link

    Also I believe I found a typo:

    "Haswell provided a significant post to emulator performance"

    Shouldn't this say 'boost' rather than 'post'?
