Gaming Benchmarks: Low End

To satisfy our curiosity regarding high-power and low-power eDRAM-based Xeons in gaming, we ran our regular suite through each processor. On this page are our integrated graphics results, along with those from a cheaper discrete graphics solution, the R7 240 DDR3.

Alien: Isolation

If first-person survival mixed with horror is your sort of thing, then Alien: Isolation, based on the Alien franchise, should be an interesting title. Developed by The Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards, from Game of the Year to several top 10s/25s and Best Horror titles, racking up over a million sales by February 2015. Alien: Isolation uses a custom-built engine which includes dynamic sound effects and should be fully multi-core enabled.

For low end graphics, we test at 720p with Ultra settings, whereas for mid and high range graphics we bump this up to 1080p, taking the average frame rate as our marker with a scripted version of the built-in benchmark.

Alien Isolation on Integrated Graphics

Alien Isolation on ASUS R7 240 DDR3 2GB ($70)

Total War: Attila

The Total War franchise moves on to Attila, another title developed by The Creative Assembly. It is a stand-alone strategy title set in 395 AD where the main story line lets the gamer take control of the leader of the Huns in order to conquer parts of the world. Graphically the game can render hundreds or thousands of units on screen at once, all with their individual actions, and can put some of the big cards to task.

For low end graphics, we test at 720p with performance settings, recording the average frame rate. With mid and high range graphics, we test at 1080p with the quality setting. In both circumstances, unlimited video memory is enabled and the in-game scripted benchmark is used.

Total War: Attila on Integrated Graphics

Total War: Attila on ASUS R7 240 DDR3 2GB ($70)

Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise finally hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark, relying only on the final part which combines a flight scene along with an in-city drive-by followed by a tanker explosion. For low end systems we test at 720p on the lowest settings, whereas mid and high end graphics play at 1080p with very high settings across the board. We record both the average frame rate and the percentage of frames under 60 FPS (16.6ms).
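The "percentage of frames under 60 FPS" figure falls straight out of a frame-time log: any frame that takes longer than 1000/60 ≈ 16.7 ms to render is, at that instant, below 60 FPS. A minimal sketch in Python, using hypothetical frame times rather than real benchmark data:

```python
# Hypothetical per-frame render times in milliseconds from a benchmark run.
frame_times_ms = [12.0, 15.5, 18.2, 16.0, 22.4, 14.1, 16.8, 13.3]

# Average frame rate: total frames divided by total elapsed time in seconds.
avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

# A frame "under 60 FPS" is one that took longer than 1000/60 ms to render.
threshold_ms = 1000.0 / 60.0
under_60 = sum(1 for t in frame_times_ms if t > threshold_ms)
pct_under_60 = 100.0 * under_60 / len(frame_times_ms)

print(f"Average FPS: {avg_fps:.1f}")
print(f"Frames under 60 FPS: {pct_under_60:.1f}%")
```

Note that the average can sit above 60 FPS while a meaningful fraction of individual frames still miss the 16.7 ms budget, which is why both numbers are worth recording.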

Grand Theft Auto V on Integrated Graphics

Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70)

GRID: Autosport

No graphics tests are complete without some input from Codemasters and the EGO engine, which means for this round of testing we point towards GRID: Autosport, the next iteration in the GRID and racing genre. As with our previous racing testing, each update to the engine aims to add in effects, reflections, detail and realism, with Codemasters making ‘authenticity’ a main focal point for this version.

GRID’s benchmark mode is very flexible, and as a result we created a test race using a shortened version of the Red Bull Ring with twelve cars doing two laps. The car in focus starts last and is quite fast, but usually finishes second or third. For low end graphics we test at 1080p medium settings, whereas mid and high end graphics get the full 1080p maximum. Both the average and minimum frame rates are recorded.

GRID: Autosport on Integrated Graphics

GRID: Autosport on ASUS R7 240 DDR3 2GB ($70)

Middle-Earth: Shadow of Mordor

The final title in our testing is another test of system performance: the open-world action-adventure Shadow of Mordor. Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity to a large extent, despite having to be cut down from the original plans. The main story was written by the same writer as Red Dead Redemption, and the game received Zero Punctuation's Game of the Year in 2014.

For testing purposes, SoM offers a dynamic screen resolution setting, allowing us to render at high resolutions that are then scaled down to the monitor. As a result, we get several tests using the in-game benchmark. For low end graphics we test at 720p with low settings, whereas mid and high end graphics get 1080p Ultra. The top graphics test is repeated at 3840x2160, again with Ultra settings, and we test two cards at 4K where possible.
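As a quick sanity check on what that dynamic resolution setting implies for GPU load, rendering at 3840x2160 and scaling down to a 1080p monitor means shading four times as many pixels per frame (simple arithmetic, not a measured result):

```python
# Output (monitor) resolution and the internal render resolution for the top test.
monitor = (1920, 1080)
render = (3840, 2160)

monitor_pixels = monitor[0] * monitor[1]  # pixels actually displayed
render_pixels = render[0] * render[1]     # pixels actually shaded per frame

# Rendering at 4K and downscaling to 1080p quadruples the per-frame pixel work.
scale_factor = render_pixels / monitor_pixels
print(f"Pixel scale factor: {scale_factor:.0f}x")
```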

Shadow of Mordor on Integrated Graphics

Shadow of Mordor on ASUS R7 240 DDR3 2GB ($70)


  • Ian Cutress - Thursday, August 27, 2015 - link

    So to clear up your misconceptions: we (or more specifically, I) have not retested any AM3 product yet on our 2015 benchmark suite due to time restrictions and a general lack of reader interest in AM3. I have 3 test beds, and our CPU/GPU tests are only partially automated, requiring 35+ working hours of active monitoring for results. (Yes, we can leave some tests running overnight, but not that many.) Reserving one test bed for a month a year for AM3+ limits the ability to do other things, such as motherboard tests/DRAM reviews/DX12 testing and so on.

    You'll notice our FX-9590 review occurred many, many months after it was officially 'released', due to consumer availability. And that was just over 12 months ago - I have not been in a position to retest AM3 since then. However, had AMD launched a new CPU for it, then I would have specifically made time to circle back around - for example I currently have the A8-7670K in to test, so chances are I'll rerun the FM2+ socket as much as possible in September.

    That being said, we recently discussed DirectX 12 testing with AMD - specifically for when more (full/non-beta) titles are launched to the public and we update our game tests (on CPU reviews) for 2016. You will most likely see the FX range of CPUs being updated in our database at that time. Between now and then, we have some overlap between the FX processors and these E3 processors in our benchmarking database, which is free for anyone to access at any time as and when we test these products. Note that there is a large price difference and a large TDP difference, but there are some minor result comparisons for you. Here's a link for the lazy:

    http://anandtech.com/bench/product/1289?vs=1538

    The FX-9590 beats the 35W v4 Xeon in CineBench, POV-Ray and Hybrid, at 1/3 of the price but 6x the power consumption.
  • Oxford Guy - Thursday, August 27, 2015 - link

    The 9590 is a specialty product, hardly what I was focusing on which is FX overclocked to a reasonable level of power consumption. The 9590 does not fall into that category.

    You can get an 8320E for around $100 at Microcenter and pair it with a discount 970 motherboard like I did ($25 with the bundle pricing a few months ago for the UD3P 2.0) and get a decent clockspeed out of it for not much money. I got my Zalman cooler for $20 via slickdeals and then got two 140mm fans for it. The system runs comfortably at 4.5 GHz (4.4 - 4.5 GHz is considered the standard for FX -- the point where performance per watt is still reasonable). Those pairing it with an EVO cooler might want 4.3 GHz or so.

    The 9590 requires an expensive motherboard, expensive (or loud) case cooling, and an expensive heatsink. Running an FX at a clockspeed below the threshold at which the chip begins to become a power hog is generally much more advisable. And review sites that aren't careful will run into throttling from VRMs or heat around the chip, which will give a false picture of the performance. People in one forum said adamantly that the 9590 chips tend to be leaky, so their power consumption is even higher than that of a low-leakage chip like the 8370E.

    One of your reviews (Broadwell, I think) had something like 8 APUs in it and not a single FX. That gives people the impression that APUs are the strongest competition AMD has. Since that's not true, it gives people the impression that this site is trying to manipulate readers into thinking Intel is further ahead than it actually is in terms of price-performance.

    There is no doubt that FX is old and was not ideal for typical desktop workloads when it came out. Even today it only has about 1.2 billion transistors and still has 32nm power consumption. But, since games are finally beginning to use more than two cores or so, and because programs like Blender (which you probably should use in your results) can leverage those cores without exaggerating the importance of FPU (as Cinebench is said to do) it seems to still be clinging to relevance. As for lack of reader interest in FX, it's hard to gauge that when your articles don't include results from even one FX chip.

    Regardless of reader interest if you're going to include AMD at all, which you should, you should use their best-performing chip (although not the power-nuts 9590) design — not APUs — unless you're specifically targeting small form factors or integrated graphics comparisons.
  • Oxford Guy - Thursday, August 27, 2015 - link

    You also ran an article about the 8320E. Why not use that 8320E, overclocked to a reasonable level like 4.5 GHz, as the basis for benchmarks you can include in reviews?
  • SuperVeloce - Thursday, August 27, 2015 - link

    Clocks are not identical (you know the meaning of that word, right?). And the 4790K was released a year after the first Haswells. Usually you compare models from the launch day of said architecture.
  • MrSpadge - Thursday, August 27, 2015 - link

    It doesn't matter what launched on launch day of the older competition. It matters what one can buy at the current launch date instead of the new product.
  • mapesdhs - Thursday, August 27, 2015 - link

    Hear hear! Reminds me of the way reference GPUs keep being used in gfx articles, even when anyone with half a clue would buy an oc'd card either because they're cheaper, or seller sites don't sell reference cards anymore anyway.
  • Oxford Guy - Wednesday, August 26, 2015 - link

    "cue the realists"

    Corporations are a conspiracy to make profit for shareholders, CEOs, etc. The assumption of conspiracy should be a given, not a "theory". Any business that isn't constantly conspiring to deliver the least product for the most return is going to either die or stagnate.
  • boxof - Wednesday, August 26, 2015 - link

    "In a recent external podcast, David Kanter"

    Couldn't bring yourselves to mention your competition huh? Stay classy.
  • Dr.Neale - Wednesday, August 26, 2015 - link

    Your comparison of the Xeon E3-1276 v3 to the E3-1285 v4, E3-1285L v4, and E3-1265L v4 is slightly but systematically biased in favor of the E3-1276 v3, because all tests use (non-ECC) DDR3-1866 memory. With ECC memory (and a C226 chipset that supports it, as in an ASUS P9D WS motherboard), the v3 Xeon is limited to DDR3-1600, while the v4 Xeons can use DDR3-1866.

    Therefore, testing the v3 Xeon with DDR3-1866 gives it a slight systematic performance boost over what it would achieve with DDR3-1600, the maximum speed it can use in an ECC / C226 workstation.

    With this in mind, I believe the performance of an E3-1276 v3 with DDR3-1600 would more closely match that of the E3-1285 v4 and E3-1285L v4 with DDR3-1866 than the graphs here indicate, where the v3 and v4 Xeons are all tested with the same DDR3-1866 memory.
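For readers wanting numbers behind the DDR3-1600 vs DDR3-1866 gap described above, the peak theoretical bandwidth difference works out to about 17% in a dual-channel configuration. A back-of-the-envelope sketch (real-world gains are typically smaller than the theoretical peak):

```python
# Peak theoretical bandwidth for dual-channel DDR3: each channel is 64 bits
# (8 bytes) wide, transferring once per MT/s.
def ddr3_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    return mt_per_s * bus_bytes * channels / 1000.0  # MB/s -> GB/s

bw_1600 = ddr3_bandwidth_gbs(1600)  # 25.6 GB/s
bw_1866 = ddr3_bandwidth_gbs(1866)  # ~29.9 GB/s
uplift_pct = 100.0 * (bw_1866 - bw_1600) / bw_1600

print(f"DDR3-1600: {bw_1600:.1f} GB/s")
print(f"DDR3-1866: {bw_1866:.1f} GB/s (+{uplift_pct:.1f}%)")
```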
  • ruthan - Thursday, August 27, 2015 - link

    This power consumption mystery needs to be solved; it's like the GeForce 970 4GB thing. Maybe Intel is cheating with those numbers, because there are customers like me who prefer lower power and silence and are ready to pay for it.

    The most typical workstation use case where I'm still missing tons of horsepower on the CPU side is virtualization, especially for gaming; VMware Workstation 12, with DX10 support, was released yesterday. Especially in a Linux environment, gaming in a virtual machine makes sense (I know, I know, there is no DX10 support even through a wrapper).
