Gaming Benchmarks: High End

To satisfy our curiosity regarding high-power and low-power eDRAM-based Xeons in gaming, we ran our regular suite through each processor. On this page are our results with the top models at their respective release dates: the GTX 980 and the R9 290X.

To answer some questions regarding our use of GTX 980s rather than GTX 980 Tis: for long-term platform testing, we need a consistent graphics setup, which changes every couple of years. This is coupled with the difficulty of sourcing several identical cards at once from contacts who have the budget available to do so. At the time of this testing cycle, the GTX 980 was NVIDIA’s top model, and ASUS stepped up to the plate with a set of 980 Strix cards. Similarly, AMD directly provided two of MSI’s R9 290X 4GB models. When it comes time to update the cycle (and/or the games), we try to test the new graphics cards on as many CPUs as possible. But it does take a substantial amount of time to set up each platform (X99, Z170, Z97, Z77, X79, X58, FM2+, AM3) and run the gauntlet of i7/i5/i3/FX/A10/A8 processors on each. That’s not to say it isn’t fun, but it is a comparatively large time investment, hence the perceived long generational delay (in terms of graphics) between GPU-on-CPU updates.

Alien: Isolation

If first-person survival mixed with horror is your sort of thing, then Alien: Isolation, based on the Alien franchise, should be an interesting title. Developed by The Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards, from Game of the Year to several top 10/25 lists and Best Horror titles, racking up over a million sales by February 2015. Alien: Isolation uses a custom-built engine that includes dynamic sound effects and should be fully multi-core enabled.

For low-end graphics, we test at 720p with Ultra settings, whereas for mid- and high-range graphics we bump this up to 1080p, taking the average frame rate as our marker with a scripted version of the built-in benchmark.
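For the curious, a minimal sketch of what such a harness might look like is below: launch the title's built-in benchmark, parse the reported result, and repeat for consistency. The executable path, launch flag, and log format here are hypothetical placeholders for illustration, not the game's actual interface.

```python
# Minimal benchmark-harness sketch. GAME_EXE, BENCH_FLAG and the log
# format are hypothetical placeholders, not the game's real interface.
import re
import statistics
import subprocess

GAME_EXE = r"C:\Games\AlienIsolation\AI.exe"  # hypothetical install path
BENCH_FLAG = "-benchmark"                     # hypothetical launch flag
LOG_FILE = "bench_results.txt"                # hypothetical results log
RUNS = 3                                      # repeat runs for consistency

def run_once() -> float:
    """Launch one benchmark pass and return the average FPS it reports."""
    subprocess.run([GAME_EXE, BENCH_FLAG], check=True)
    with open(LOG_FILE) as f:
        # Hypothetical log line: "Average FPS: 87.4"
        match = re.search(r"Average FPS:\s*([\d.]+)", f.read())
    if match is None:
        raise RuntimeError("no average FPS found in benchmark log")
    return float(match.group(1))

if __name__ == "__main__":
    results = [run_once() for _ in range(RUNS)]
    print(f"Mean of {RUNS} runs: {statistics.mean(results):.1f} FPS")
```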

Alien Isolation on MSI R9 290X Gaming LE 4GB ($380)

Alien Isolation on ASUS GTX 980 Strix 4GB ($560)

Total War: Attila

The Total War franchise moves on to Attila, another development from The Creative Assembly, and a stand-alone strategy title set in 395 AD where the main storyline lets the gamer take control of the leader of the Huns in order to conquer parts of the world. Graphically, the game can render hundreds or thousands of units on screen at once, each with its own individual actions, and can put some of the big cards to task.

For low-end graphics, we test at 720p with performance settings, recording the average frame rate. With mid- and high-range graphics, we test at 1080p with the quality setting. In both circumstances, unlimited video memory is enabled and the in-game scripted benchmark is used.

Total War: Attila on MSI R9 290X Gaming LE 4GB ($380)

Total War: Attila on ASUS GTX 980 Strix 4GB ($560)

Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise finally hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but instead opens the options up to users and extends the boundaries by pushing even the hardiest systems to the limit using Rockstar’s Advanced Game Engine. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum the game creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark, relying only on the final part, which combines a flight scene with an in-city drive-by followed by a tanker explosion. For low-end systems we test at 720p on the lowest settings, whereas mid- and high-end graphics play at 1080p with Very High settings across the board. We record both the average frame rate and the percentage of frames under 60 FPS (i.e. frames taking longer than 16.6 ms to render).
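Both reported numbers fall out of a per-frame timing capture. The sketch below shows the arithmetic, assuming the capture is a simple list of frame times in milliseconds; the input format and example values are illustrative assumptions.

```python
# Deriving the two GTA V metrics from per-frame timings (milliseconds).
# The single-list input format is an assumption; any frametime capture
# (FRAPS/PresentMon-style) would do after parsing.
def summarize(frametimes_ms: list[float]) -> tuple[float, float]:
    """Return (average FPS, percentage of frames under 60 FPS)."""
    total_seconds = sum(frametimes_ms) / 1000.0
    avg_fps = len(frametimes_ms) / total_seconds
    # A frame "under 60 FPS" is one that took longer than 16.6 ms.
    slow = sum(1 for t in frametimes_ms if t > 1000.0 / 60.0)
    return avg_fps, 100.0 * slow / len(frametimes_ms)

# Example: a mostly smooth run with two heavy frames (illustrative data)
avg, pct = summarize([14.2, 15.0, 16.1, 33.4, 14.8, 15.3, 41.0, 15.1])
print(f"{avg:.1f} FPS average, {pct:.1f}% of frames under 60 FPS")
```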

Grand Theft Auto V on MSI R9 290X Gaming LE 4GB ($380)

Grand Theft Auto V on ASUS GTX 980 Strix 4GB ($560)

GRID: Autosport

No graphics test suite is complete without some input from Codemasters and the EGO engine, which means for this round of testing we point towards GRID: Autosport, the next iteration in the GRID racing franchise. As with our previous racing tests, each update to the engine aims to add effects, reflections, detail and realism, with Codemasters making ‘authenticity’ a main focal point for this version.

GRID’s benchmark mode is very flexible, and as a result we created a test race using a shortened version of the Red Bull Ring with twelve cars doing two laps. The car in focus starts last and is quite fast, but usually finishes second or third. For low-end graphics we test at 1080p medium settings, whereas mid- and high-end graphics get the full 1080p maximum. Both the average and minimum frame rates are recorded.
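The minimum figure is commonly derived from the same kind of per-frame capture as the GTA V metrics above, with the slowest single frame setting the floor. A two-line sketch under that assumption (the game's own reporting may differ):

```python
# One common definition of minimum frame rate: the single slowest frame.
def min_fps(frametimes_ms: list[float]) -> float:
    return 1000.0 / max(frametimes_ms)

print(min_fps([14.2, 15.0, 16.1, 33.4, 14.8, 15.3, 41.0, 15.1]))  # ~24.4 FPS
```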

GRID: Autosport on MSI R9 290X Gaming LE 4GB ($380)

GRID: Autosport on ASUS GTX 980 Strix 4GB ($560)

Middle-Earth: Shadow of Mordor

The final title in our testing is another battle of system performance with the open-world action-adventure title Shadow of Mordor. Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity to a large extent, despite having been cut down from the original plans. The main story was written by the same writer as Red Dead Redemption, and the game received Zero Punctuation’s Game of the Year for 2014.

For testing purposes, SoM provides a dynamic screen resolution setting, allowing us to render at high resolutions that are then scaled down to the monitor. As a result, we get several tests using the in-game benchmark. For low-end graphics we test at 720p with low settings, whereas mid- and high-end graphics get 1080p Ultra. The top graphics test is repeated at 3840x2160, also with Ultra settings, and we test two cards at 4K where possible.
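To put the 4K retest in perspective, the jump from 1080p to 3840x2160 quadruples the pixels rendered per frame, which is why we fold in a second card where possible. A quick sanity check on the arithmetic:

```python
# Pixel workload: 3840x2160 pushes exactly four times as many
# pixels per frame as 1920x1080.
pixels_1080p = 1920 * 1080  # 2,073,600 pixels
pixels_4k = 3840 * 2160     # 8,294,400 pixels
print(pixels_4k / pixels_1080p)  # 4.0
```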

Shadow of Mordor on MSI R9 290X Gaming LE 4GB ($380)

Shadow of Mordor on MSI R9 290X Gaming LE 4GB ($380) at 4K

Shadow of Mordor on 2x MSI R9 290X Gaming LE 4GB ($380) at 4K

Shadow of Mordor on ASUS GTX 980 Strix 4GB ($560)

Shadow of Mordor on ASUS GTX 980 Strix 4GB ($560) at 4K

Shadow of Mordor on 2x ASUS GTX 980 Strix 4GB ($560) at 4K

 

Comments

  • piasabird - Wednesday, August 26, 2015

    At $446 this isn't exactly an entry-level CPU. I wonder where the desktop CPUs with Iris graphics are, like the i5-5575R, which is supposed to be priced at $244 and available now but is not for sale anywhere.
  • piasabird - Wednesday, August 26, 2015

    I guess Intel makes this processor but would rather have you buy a more expensive one. What is up with this? The same thing goes for the i5-5675C.
  • dgingeri - Wednesday, August 26, 2015

    It's not meant to be a cheap CPU. It's a workstation/server chip. It has some additional data integrity features that the normal desktop CPUs can't use, like ECC memory. The drivers for the GPU are also optimized and tested for workstation-level software, which is expensive to do. Sometimes, frequency alone isn't enough.
  • Camikazi - Friday, August 28, 2015

    I always wonder how people don't see that a server part is going to be more expensive than a desktop part. They always have been and always will be, because they are binned higher and have additional features that desktop parts don't or can't use. That said, $446 for this CPU is actually rather cheap for an entry-level Xeon and is not a bad price.
  • Free008 - Tuesday, September 1, 2015

    That's right, it's too expensive. Intel will continue to gouge consumers with lower-quality binned parts and disabled server features until Apple starts making decent desktop CPUs, and then we can forever leave Intel and Microsoft at our leisure. That's why none of the mobile Intel CPUs are selling - most of the suppliers don't want to go back to the old monopoly days regardless of performance (which isn't improving significantly anymore anyway, just power savings). Intel thinks suppliers and consumers will put up with this forever, but they are so wrong. It's just a matter of time now.
  • zoxo - Wednesday, August 26, 2015

    It is rather disappointing that energy efficiency seems to have regressed since the awesome 4790K. I was hoping that switching to 14nm would allow Intel to do what the 88W 4790K could do in a 65W power envelope, but neither Broadwell nor Skylake seems able to deliver on that promise.
  • mmrezaie - Wednesday, August 26, 2015

    I am also wondering why, even though performance is not changing that much, power usage is not getting that much better.
  • milkod2001 - Wednesday, August 26, 2015

    While the CPU performance of the chip is only a little bit better, its GPU part is much bigger and performs much better, hence power consumption is the same as on older chips. It's actually an achievement.

    For regular desktop CPUs I'd prefer Intel to give us a native mainstream six-core with no GPU at all. But that would not play nicely with the premium E series CPUs. Money, money, money. Give me more :)
  • zoxo - Wednesday, August 26, 2015

    If you consider pure CPU loads, Broadwell/Skylake doesn't seem to show much power advantage over Devil's Canyon when you are getting into the 4GHz range. Skylake seems to be more overclock-friendly, but it does consume a lot of power doing it.
  • azazel1024 - Friday, August 28, 2015

    I was thinking the same thing based on AnandTech's original tests, but if you look at their notes under the delta power consumption, and at a few other review sites, it looks a lot like motherboard manufacturers are all over the map with voltage/frequency curves for Skylake (and, I assume, with Broadwell too), and it is biting them in the butt on power consumption. You've got a difference of easily 35% in power consumption from one board to the next using the same chip.

    Using the better numbers I have seen in some tests, Skylake, specifically the 6700K, is actually significantly better than any other generation in performance per watt. Looking at the higher numbers in a few reviews, it is much worse than Broadwell and Haswell and only fractionally better than Ivy Bridge. I suspect that with Skylake, and probably Broadwell, Intel's 14nm process has poor voltage/frequency scaling, and that most motherboard manufacturers are choosing poor voltage curves for the chip in an attempt to be extremely conservative.

    A knock-on effect here is that it is likely to be impacting actual performance too. If the 6700K has a TDP of 94W and the delta power is 110W... I'd half imagine that there is some throttling going on there with some loads.
