Gaming Benchmarks: Low End

To satisfy our curiosity regarding the high power and low power eDRAM-based Xeons in gaming, we ran our regular suite through each processor. On this page are our integrated graphics results, along with results from a cheaper discrete graphics solution, the R7 240 DDR3.

Alien: Isolation

If first-person survival mixed with horror is your sort of thing, then Alien: Isolation, based on the Alien franchise, should be an interesting title. Developed by The Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards, from Game of the Year to several top 10/25 placements and Best Horror titles, racking up over a million sales by February 2015. Alien: Isolation uses a custom-built engine which includes dynamic sound effects and should be fully multi-core enabled.

For low end graphics, we test at 720p with Ultra settings, whereas for mid and high range graphics we bump this up to 1080p, taking the average frame rate as our marker with a scripted version of the built-in benchmark.
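For those wondering what a 'scripted version of the built-in benchmark' involves in practice, the sketch below shows the general automation pattern: launch the game's benchmark with a fixed settings preset, wait for it to complete, and parse the average frame rate from its output. The executable name, command-line arguments and results format here are purely illustrative placeholders and not the actual Alien: Isolation interface.

    # Hypothetical harness for a scripted run of a built-in game benchmark.
    # The binary name, arguments and results format are placeholders, not
    # the real Alien: Isolation interface.
    import re
    import subprocess
    from pathlib import Path

    GAME_CMD = ["AlienIsolationBenchmark.exe", "-preset", "ultra", "-res", "1280x720"]
    RESULTS_FILE = Path("benchmark_results.txt")  # assumed output location

    def run_benchmark() -> float:
        """Run one benchmark pass and return the reported average FPS."""
        subprocess.run(GAME_CMD, check=True)
        text = RESULTS_FILE.read_text()
        match = re.search(r"Average FPS:\s*([\d.]+)", text)
        if not match:
            raise RuntimeError("No average FPS found in results file")
        return float(match.group(1))

    if __name__ == "__main__":
        # Repeat a few passes to smooth out run-to-run variance.
        runs = [run_benchmark() for _ in range(3)]
        print(f"Average FPS over {len(runs)} runs: {sum(runs) / len(runs):.1f}")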

Alien Isolation on Integrated Graphics

Alien Isolation on ASUS R7 240 DDR3 2GB ($70)

Total War: Attila

The Total War franchise moves on to Attila, another title from The Creative Assembly, and is a stand-alone strategy game set in 395 AD where the main story line lets the gamer take control of the leader of the Huns in order to conquer parts of the world. Graphically, the game can render hundreds or thousands of units on screen at once, each with its own individual actions, and it can put some of the big cards to task.

For low end graphics, we test at 720p with performance settings, recording the average frame rate. With mid and high range graphics, we test at 1080p with the quality setting. In both circumstances, unlimited video memory is enabled and the in-game scripted benchmark is used.

Total War: Attila on Integrated Graphics

Total War: Attila on ASUS R7 240 DDR3 2GB ($70)

Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise finally hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn't provide graphical presets, but instead opens up the options to users and extends the boundaries, pushing even the toughest systems to the limit with Rockstar's Advanced Game Engine. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, cranking the settings up to maximum creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark, relying only on the final part, which combines a flight scene with an in-city drive-by followed by a tanker explosion. For low end systems we test at 720p on the lowest settings, whereas mid and high end graphics play at 1080p with very high settings across the board. We record both the average frame rate and the percentage of frames under 60 FPS (16.6 ms).
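The 'percentage of frames under 60 FPS' figure falls out of the per-frame render times: any frame that takes longer than 16.6 ms has missed a 60 FPS pace. A minimal sketch of that calculation, assuming a simple list of frame times in milliseconds rather than GTA V's actual benchmark output format:

    # Derive average FPS and the share of frames slower than 60 FPS (16.6 ms)
    # from per-frame render times in milliseconds. The input format is an
    # assumption for illustration, not GTA V's real log format.

    def summarize(frame_times_ms: list[float]) -> tuple[float, float]:
        total_ms = sum(frame_times_ms)
        avg_fps = 1000.0 * len(frame_times_ms) / total_ms
        slow = sum(1 for t in frame_times_ms if t > 1000.0 / 60.0)  # > 16.6 ms
        pct_under_60 = 100.0 * slow / len(frame_times_ms)
        return avg_fps, pct_under_60

    if __name__ == "__main__":
        times = [12.1, 15.8, 22.4, 16.0, 30.2, 14.3]  # example data only
        avg, pct = summarize(times)
        print(f"Average: {avg:.1f} FPS, {pct:.1f}% of frames under 60 FPS")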

Grand Theft Auto V on Integrated Graphics

Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70)

GRID: Autosport

No graphics test suite is complete without some input from Codemasters and the EGO engine, which means for this round of testing we point towards GRID: Autosport, the next iteration in the GRID racing series. As with our previous racing testing, each update to the engine aims to add effects, reflections, detail and realism, with Codemasters making 'authenticity' a main focal point for this version.

GRID's benchmark mode is very flexible, and as a result we created a test race using a shortened version of the Red Bull Ring with twelve cars doing two laps. The car in focus starts last and is quite fast, but usually finishes second or third. For low end graphics we test at 1080p medium settings, whereas mid and high end graphics get the full 1080p maximum. Both the average and minimum frame rates are recorded.

GRID: Autosport on Integrated Graphics

GRID: Autosport on ASUS R7 240 DDR3 2GB ($70)

Middle-Earth: Shadow of Mordor

The final title in our testing is another test of system performance with the open world action-adventure title, Shadow of Mordor. Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity to a large extent, despite having to be cut down from the original plans. The main story was written by the same writer as Red Dead Redemption, and the game received Zero Punctuation's Game of the Year in 2014.

For testing purposes, SoM offers a dynamic screen resolution setting, allowing us to render at high resolutions that are then scaled down to the monitor. As a result, we get several tests using the in-game benchmark. For low end graphics we test at 720p with low settings, whereas mid and high end graphics get 1080p Ultra. The top graphics test is also redone at 3840x2160 with Ultra settings, and we test two cards at 4K where possible.
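The arithmetic behind those resolution steps is worth spelling out: 3840x2160 pushes exactly four times as many pixels per frame as 1920x1080, which is why the 4K Ultra run is by far the heaviest GPU load in the suite. A quick check of the ratios (pixel counts only, not a performance prediction):

    # Rendering load implied by each test resolution, relative to 1080p.
    resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
    base = 1920 * 1080

    for name, (w, h) in resolutions.items():
        pixels = w * h
        print(f"{name}: {pixels:,} pixels per frame ({pixels / base:.2f}x 1080p)")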

Shadow of Mordor on Integrated Graphics

Shadow of Mordor on ASUS R7 240 DDR3 2GB ($70)

Comments

  • runciterassociates - Wednesday, August 26, 2015 - link

    This is a server chip. Why are you benchmarking games?
    Furthermore, for SPEC, why are you using a dGPU when this chip has on die graphics?
    Where are the OpenCL, OpenMP, GPGPU benchmarks, which are going to be the majority of how these will be used for green heterogeneous computing?
  • Gigaplex - Wednesday, August 26, 2015 - link

    The E3 Xeons are more likely to be used in a workstation than a server.
  • TallestJon96 - Wednesday, August 26, 2015 - link

    They benchmark games because ignorant gamers (like myself) love to see gaming benchmarks for everything, even if they will never be used for games! If it was a 20-core Xeon clocked at 2 GHz with hyper-threading, we would want the benchmarks, even though they just show that everything i5 and up performs identically. We are a strange species, and you should not waste your time trying to understand us.
  • Oxford Guy - Wednesday, August 26, 2015 - link

    No benchmarks are irrelevant when they involve products people are using today. Gaming benchmarks are practical. However, that doesn't mean charts are necessarily well-considered, such as with how this site refuses to include a 4.5 GHz FX chip (or any FX chip) and instead only includes weaker APUs.
  • Ian Cutress - Thursday, August 27, 2015 - link

    As listed in a couple of sections of the review, this is because Broadwell-H on the desktop does not have an equivalent 84W part for previous generations, and this allows us, perhaps somewhat academically, to see if there ends up being a gaming difference between Broadwell and Haswell at the higher power consumption levels.
  • Jaybus - Friday, August 28, 2015 - link

    Because, as stated in the article, the Ubuntu Live CD kernel was a fail for these new processors, so they couldn't run the Linux stuff.
  • Voldenuit - Wednesday, August 26, 2015 - link

    SPECviewperf on a desktop card?

    I'd be interested to see if a Quadro or FirePro would open up the gap between the CPUs.
  • mapesdhs - Thursday, August 27, 2015 - link

    I was wondering that too; desktop cards get high numbers for Viewperf 12 because they cheat in the driver layer on image quality. SPEC testing should be done with pro cards, where the relevance is more sensible. The situation is worse now because both GPU makers have fiddled with their drivers to be more relevant to consumer cards. Contrast how Viewperf 12 behaves with desktop cards against the performance spread observed with Viewperf 11; the differences are enormous.

    For example, testing a 980 vs. a Quadro K5000 with Viewperf 11 and 12, the 980 is 3x faster than the K5000 for Viewperf 12, whereas the K5000 is 6x faster than the 980 for Viewperf 11. More than an order of magnitude performance shift just by using the newer test suite?? I have been told by tech site people elsewhere that the reason is changes to drivers and the use of much lower image quality on consumer cards. Either way, it makes a nonsense of the usefulness of Viewperf if this is what's going on now. Otherwise, someone has to explain why the 980 compares so differently to a K5000 for Viewperf 11.
  • Ian Cutress - Thursday, August 27, 2015 - link

    Both points noted. I'll see what I can do to obtain the professional cards.
  • XZerg - Wednesday, August 26, 2015 - link

    The gaming charts are messed up - the IGP performs faster than the dGPU on the SAME settings? I think something is wrong - most likely the labels of the settings.

    Also it would have been better to compare IGP performance against the older versions of Iris - where is the 4770R? The point here is that, while keeping the wattage similar, what are we really getting out of 14nm?
