Sleeping Dogs

Sleeping Dogs is a benchmarking wet dream – a highly complex benchmark that can bring the toughest setups, at high resolutions, down into single-digit frame rates. Its extreme SSAO setting can do that, but at the right settings Sleeping Dogs is highly playable and enjoyable. We run the basic benchmark program laid out in the Adrenaline benchmark tool at the Xtreme (1920x1080, Maximum) performance setting, noting down the average and minimum frame rates.
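For each title the reduction step is the same: take the per-run frame log and boil it down to the average and minimum frame rates we chart. Below is a minimal sketch of that step; the input format (one frame time in milliseconds per line) and file name are assumptions for illustration, not the Adrenaline tool's documented output.

```python
# Minimal sketch of reducing a frame log to the two numbers we chart.
# The format (one frame time in milliseconds per line) is an assumed
# illustration, not the Adrenaline tool's documented output.
import csv

def summarize(path):
    with open(path, newline="") as f:
        frame_ms = [float(row[0]) for row in csv.reader(f) if row]
    avg_fps = len(frame_ms) / (sum(frame_ms) / 1000.0)  # frames / total seconds
    min_fps = 1000.0 / max(frame_ms)                    # slowest single frame
    return avg_fps, min_fps

avg_fps, min_fps = summarize("sleeping_dogs_run1.csv")  # hypothetical log file
print(f"Average: {avg_fps:.1f} FPS, Minimum: {min_fps:.1f} FPS")
```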

[Chart: Sleeping Dogs, 1080p Max, 1x GTX 770 – average and minimum frame rates, NVIDIA and AMD GPU configurations]
The lower frequency of the 12-core Xeon sometimes puts it behind in our Sleeping Dogs testing, usually in multi-GPU results such as 3x HD 7970, where it is 15 FPS behind both the i7-4960X and the E5-2687W v2.

Company of Heroes 2

The final gaming benchmark is another humdinger. Company of Heroes 2 can also bring a top-end GPU to its knees, even at very basic benchmark settings. Getting an average of 30 FPS using a normal GPU is a challenge, let alone a minimum frame rate of 30 FPS. For this benchmark I use modified versions of Ryan's batch files at 1920x1080 on Medium settings. COH2 is a little odd in that it does not scale with more GPUs.
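To give a rough idea of what a scripted run of this kind looks like, here is a hypothetical loop around the game's benchmark mode. The executable name and command line switches are illustrative assumptions, not COH2's real interface.

```python
# Hypothetical scripted benchmark loop in the spirit of the batch files
# mentioned above. Executable name and switches are assumptions.
import subprocess
import time

RUNS = 4  # first run warms shader/texture caches and is discarded

for i in range(RUNS):
    start = time.time()
    subprocess.run(
        ["RelicCoH2.exe", "-benchmark", "-preset", "medium"],  # assumed flags
        check=True,
    )
    print(f"Run {i}: completed in {time.time() - start:.0f} s")
```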

[Chart: Company of Heroes 2, 1080p Max, 1x GTX 770 – average and minimum frame rates, NVIDIA and AMD GPU configurations]
COH2 also acts somewhat CPU agnostic, although the higher frequency Xeon does have a small, if negligible, boost over the E5-2697 v2. In all circumstances, the i7-4960X is competitive.

Battlefield 4

The EA/DICE series that has taken countless hours of my life away is back for another iteration, using the Frostbite 3 engine. AMD is also piling its resources into BF4 with the new Mantle API for developers, designed to cut the time required for the CPU to dispatch commands to the graphical sub-system. For our test we use the in-game benchmarking tools and record the frame times for the first ~70 seconds of the Tashgar single player mission, which is an on-rails sequence of object and texture generation and rendering. We test at 1920x1080 on Ultra settings.
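Because BF4 is reported as frame times rather than frame rates, the 99th percentile figure charted below is derived from the log. A sketch of that derivation is shown here; the file name and format (one frame time in milliseconds per line, FRAPS-style) are assumptions for illustration.

```python
# Sketch of deriving the two BF4 metrics from a frame time log.
# File name and format (one frame time in ms per line) are assumed.
import numpy as np

frame_ms = np.loadtxt("bf4_tashgar_frametimes.txt")

# Average FPS: total frames rendered divided by total wall time.
avg_fps = 1000.0 * len(frame_ms) / frame_ms.sum()

# 99th percentile frame rate: the FPS equivalent of the frame time that
# 99% of frames beat, so the slowest 1% of frames sets the figure.
p99_fps = 1000.0 / np.percentile(frame_ms, 99)

print(f"Average: {avg_fps:.1f} FPS, 99th percentile: {p99_fps:.1f} FPS")
```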

[Chart: Battlefield 4, 1080p Max, 1x GTX 770 – average and 99th percentile frame rates, NVIDIA and AMD GPU configurations]
As we add more GPUs, AMD and NVIDIA act differently: with NVIDIA, more MHz gets better frame rates, whereas with AMD, more cores win out.

Conclusions

It would seem that in our gaming benchmarks, the higher frequency E5-2687W v2 is the more obvious choice over the 12-core E5-2697 v2. However, in almost all circumstances they perform on par with or below the i7-4960X, suggesting that the games tested cannot take advantage of more threads.

Comments

  • Ian Cutress - Tuesday, March 18, 2014 - link

    I need to spend some time to organise this with my new 2014 benchmark setup. That and I've never used bench to add data before. But I will be putting some data in there for everyone :)
  • Maxal - Tuesday, March 18, 2014 - link

    There is one sad thing - the disappearance of 2C/4T high clock speed CPUs, as Oracle Enterprise Edition charges per core... and sometimes you need just a small installation, but with EE features...
  • Rick83 - Tuesday, March 18, 2014 - link

    Wouldn't L3/thread be a more useful metric than L3/core in the big table?
    After all, HT will only really work if both threads are in cache, and if you can get a CPU with HT and one without, as is the case with the Xeons, you'd get the one without because you are running more concurrent threads. That means that under optimum conditions you have 2 active threads per core, and thus 2x#cores threads that need to be in the data caches.
  • HalloweenJack - Tuesday, March 18, 2014 - link

    holy shit anandtech you really have gone to the dogs - comparing a £2000 CPU against a £100 APU and saying it's better... and really? where's the AMD AM3+ CPUs? 8350 or 9590? seriously
  • Ian Cutress - Tuesday, March 18, 2014 - link

    Let's see. I'm not comparing it against a £100 APU, I'm comparing it against the $1000 Core i7-4960X to see the difference. We're using a new set of benchmarks for 2014, which I have already run on the APU so I include them here as a point of reference for AMD's new highest performance line. It is interesting to see where the APU and Xeon line up in the benchmarks to show the difference (if any). AMD's old high end line has stagnated - I have not tested those CPUs in our new 2014 set of benchmarks. There have been no new AM3+ platforms or CPUs this year, or almost all of last year. Testing these two CPUs properly took the best part of three weeks, including all the other work such as news, motherboard reviews, Mobile World Congress coverage, meetings, extra testing, bug fixing, conversing with engineers on how to solve issues. Sure, let's just stop all that and pull out an old system to test. If I had the time I really would, but I was able to get these processors from GIGABYTE, not Intel, for a limited time. I have many other projects (memory scaling, Gaming CPU) that would take priority if I had time.

    AKA I think you missed the point of the article. If you have a magical portal to Narnia, I'd happily test until I was blue in the face and go as far back to old Athlon s939 CPUs. But the world moves faster than that.
  • deadrats - Tuesday, March 18, 2014 - link

    any chance of updating this article with some x265 and/or DivX265 benchmarks? HEVC is much more processor intensive and threading friendly, so these encoders may be perfect for showing a greater separation between the various core configurations.
  • Ian Cutress - Tuesday, March 18, 2014 - link

    If you have an encoder in mind drop me an email. Click my name at the top of the article.
  • bobbozzo - Tuesday, March 18, 2014 - link

    Hi,

    1. please change the charts' headings on the first page to say 'Cores/Threads' instead of 'Cores'.

    2. it wasn't clear on the first page that this is talking about workstation CPUs.

    3. "Intel can push core counts, frequency and thus price much higher than in the consumer space"
    I would have said core counts and cache...
    Don't the consumer parts have the highest clocks (before overclocking)?

    Thanks!
  • bobbozzo - Tuesday, March 18, 2014 - link

    "it wasn't clear on the first page that this is talking about workstation CPUs."

    As opposed to servers.
  • Ian Cutress - Tuesday, March 18, 2014 - link

    1) I had it that way originally but it broke the table layout due to being too wide. I made a compromise and hoped people would follow the table in good faith.
    2) Generally Xeon in the name means anything Workstation and above. People use Xeons for a wide variety of uses - high end for workstations, or low end for servers, or vice versa.
    3) Individual core counts maybe, but when looking at 8c or 12c chips in the same power bracket, the frequency is still being pushed to more stringent requirements (thus lower yields/bin counts) vs. voltages. Then again, the E3-1290 does go to 4.0 GHz anyway, so in terms of absolute frequencies you can say (some) Xeons at least match the consumer parts.
