Gaming Benchmarks: Low End

Alien: Isolation

If first-person survival mixed with horror is your sort of thing, then Alien: Isolation, based on the Alien franchise, should be an interesting title. Developed by The Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards, from Game of the Year to several top 10/25 lists and Best Horror titles, racking up over a million sales by February 2015. Alien: Isolation uses a custom-built engine which includes dynamic sound effects and should be fully multi-core enabled.

For low end graphics, we test at 720p with Ultra settings, whereas for mid and high range graphics we bump this up to 1080p, taking the average frame rate as our marker with a scripted version of the built-in benchmark.

Alien Isolation on Integrated Graphics

Alien Isolation on ASUS R7 240 DDR3 2GB ($70)

Alien Isolation on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics

When it comes to integrated graphics, the APUs are ruling the roost. The A8-7650K sits in its stack where it should, between the A10-7800 and the A8-7600. When we use a low-end GPU, all of our CPUs perform similarly, showing that this benchmark is more GPU-limited at this level. In Dual Graphics mode, the frame rate moves up to just under double the integrated value.

Total War: Attila

The Total War franchise moves on to Attila, another title from The Creative Assembly, and is a stand-alone strategy game set in 395 AD where the main storyline lets the gamer take control of the leader of the Huns in order to conquer parts of the world. Graphically the game can render hundreds or thousands of units on screen at once, each with its own individual actions, and it can put even the big cards through their paces.

For low end graphics, we test at 720p with performance settings, recording the average frame rate. With mid and high range graphics, we test at 1080p with the quality setting. In both circumstances, unlimited video memory is enabled and the in-game scripted benchmark is used.

Total War: Attila on Integrated Graphics

Total War: Attila on ASUS R7 240 DDR3 2GB ($70)

Total War: Attila on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics

Similarly with Attila, the AMD APUs at $100 beat an Intel IGP at $340. When we move to the R7 240, however, the Intel CPUs actually have a slight advantage, perhaps showing that Attila needs CPU performance here. Again, Dual Graphics mode offers nearly double the frame rate, almost hitting 60 FPS.

Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise finally hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark, relying only on the final part which combines a flight scene along with an in-city drive-by followed by a tanker explosion. For low end systems we test at 720p on the lowest settings, whereas mid and high end graphics play at 1080p with very high settings across the board. We record both the average frame rate and the percentage of frames under 60 FPS (16.6ms).
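
For those curious, both GTA V metrics boil down to simple arithmetic over the per-frame render times. The short Python snippet below is a minimal sketch of that calculation, assuming a hypothetical plain-text log with one frame time in milliseconds per line; it is an illustration only, not the actual harness our scripted benchmark uses.

    # frame_metrics.py - minimal sketch, not our actual benchmark tooling
    # Assumes a hypothetical log file with one frame time (in ms) per line.
    def frame_metrics(path, threshold_ms=1000.0 / 60.0):
        with open(path) as f:
            frame_times = [float(line) for line in f if line.strip()]
        # Average FPS is total frames divided by total time in seconds.
        avg_fps = 1000.0 * len(frame_times) / sum(frame_times)
        # Any frame slower than 16.6 ms was delivered at under 60 FPS.
        pct_under_60 = 100.0 * sum(t > threshold_ms for t in frame_times) / len(frame_times)
        return avg_fps, pct_under_60

    if __name__ == "__main__":
        avg, pct = frame_metrics("gta_frametimes.txt")
        print(f"Average: {avg:.1f} FPS, {pct:.1f}% of frames under 60 FPS")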

Grand Theft Auto V on Integrated Graphics
Grand Theft Auto V on Integrated Graphics [Under 60 FPS]

Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70)
Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70) [Under 60 FPS]

Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics
Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics [Under 60 FPS]

One of the surprises in this review, for me, was the GTA performance. Here we have a $105 APU that easily breaks through the 30 FPS barrier at our low-end settings, almost hitting a 50 FPS average. The graph on the right shows the percentage of frames under 60 FPS (or over 16.6 ms), and it's clear that at these settings more horsepower is needed. Using the R7 240 tells a slightly different story, although the $105 APU sits in between the $72 and $122 Intel CPUs.

GRID: Autosport

No graphics tests are complete without some input from Codemasters and the EGO engine, which means for this round of testing we point towards GRID: Autosport, the latest iteration in the GRID racing series. As with our previous racing tests, each update to the engine aims to add effects, reflections, detail and realism, with Codemasters making ‘authenticity’ a main focal point for this version.

GRID’s benchmark mode is very flexible, and as a result we created a test race using a shortened version of the Red Bull Ring with twelve cars doing two laps. The car in focus starts last and is quite fast, but usually finishes second or third. For low end graphics we test at 1080p medium settings, whereas mid and high end graphics get the full 1080p maximum. Both the average and minimum frame rates are recorded.

GRID: Autosport on Integrated Graphics
GRID: Autosport on Integrated Graphics [Minimum FPS]

GRID: Autosport on ASUS R7 240 DDR3 2GB ($70)
GRID: Autosport on ASUS R7 240 DDR3 2GB ($70) [Minimum FPS]

GRID: Autosport on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics
GRID: Autosport on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics [Minimum FPS]

Codemasters' racing engines have historically liked as much hardware as can be thrown at them, and on integrated graphics it is a clear win for AMD's parts, which get almost double the frame rate. This was perhaps expected, knowing how AMD dedicates more of its die area to graphics. When we stick in the R7 240, the difference becomes negligible, and only a small rise is seen from Dual Graphics.

Middle-earth: Shadow of Mordor

The final title in our testing is another battle of system performance with the open-world action-adventure title, Shadow of Mordor. Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity to a large extent, despite having to be cut down from the original plans. The main story itself was written by the same writer as Red Dead Redemption, and it received Zero Punctuation's Game of the Year award for 2014.

For testing purposes, SoM provides a dynamic screen resolution setting, allowing us to render at high resolutions that are then scaled down to the monitor. As a result, we get several tests using the in-game benchmark. For low end graphics we test at 720p with low settings, whereas mid and high end graphics get 1080p Ultra. The top graphics test is redone at 3840x2160, again with Ultra settings, and we also test two cards at 4K where possible.
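
To put the dynamic resolution option in perspective, the snippet below is a rough sketch of the arithmetic involved: it works out the internal render resolution and the relative pixel load for a given scale factor on a 1080p panel. The numbers are generic illustrations, not values taken from the game's own settings screen.

    # render_scale.py - rough sketch of dynamic-resolution arithmetic
    def internal_resolution(panel_w, panel_h, scale):
        # The game renders at the panel resolution multiplied by the scale
        # factor on each axis, then downsamples the result to the panel.
        render_w, render_h = int(panel_w * scale), int(panel_h * scale)
        pixel_load = (render_w * render_h) / (panel_w * panel_h)
        return render_w, render_h, pixel_load

    for scale in (1.0, 1.5, 2.0):
        w, h, load = internal_resolution(1920, 1080, scale)
        print(f"scale {scale}: {w}x{h} internal, {load:.2f}x the pixel work of native 1080p")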

Shadow of Mordor on Integrated Graphics
Shadow of Mordor on Integrated Graphics [Minimum FPS]

Shadow of Mordor on ASUS R7 240 DDR3 2GB ($70)
Shadow of Mordor on ASUS R7 240 DDR3 2GB ($70) [Minimum FPS]

Shadow of Mordor on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics
Shadow of Mordor on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics [Minimum FPS]

Similar to GRID, Mordor loves integrated graphics, with a clear margin for AMD over Intel in both average and minimum frame rates. With the R7 240, all the CPUs are more-or-less equal, although Intel has the upper hand in minimum frame rates. Dual Graphics mode gives a good boost to the average frame rate, moving from 44 FPS to 64 FPS on the A8-7650K.

177 Comments

  • jabber - Tuesday, May 12, 2015 - link

    Exactly.

    "Yayyyy I use 7Zip all day long! "

    Said no one...ever.

    I don't even know why people still compact files? Are they still using floppies? Man, poor devils.
  • Gigaplex - Tuesday, May 12, 2015 - link

    I've been getting BSODs lately due to a bad Windows Update. The Microsoftie asked me to upload a complete memory crash dump. There's no way I can upload a 16GB dump file in a reasonable timeframe on a ~800kbps upload connection, especially when my machine BSODs every 24 hours. Compression brought that down to a much more manageable 4GB.
  • galta - Tuesday, May 12, 2015 - link

    So it makes perfect sense for you to stay with AMD...
  • NeatOman - Wednesday, May 13, 2015 - link

    I use it everyday :( rocking a FX-8320@4.5Ghz for the last 3 years.. I picked it up for $180 with the CPU and!! Motherboard. I was about to pick up a 3770k too, saved about $200 but am about 15-20% down on performance. And if you're worried about electrical cost, you're walking over dollars to pick up pennies.

    I do it to send pictures of work I do, and a good SSD is key :)
  • UtilityMax - Tuesday, May 12, 2015 - link

    If you look at the WinRAR benchmark, then that result strongly suggests that WinRAR is multi-threaded. I mean, two core two thread Pentium is clearly slower than the two core but four thread Core i3, and quad-core i5 is clearly faster than Core i3, and Core i7 with its eight threads is clearly faster than Core i5. Hence galta's comment that AMD FX with 8 cores is probably even faster, but he says that this is not normal usage.
  • TheJian - Thursday, May 14, 2015 - link

    There is an actual checkbox in winrar for multithreading for ages now. ROFL. 95% of usenet uses winrar, as does most of the web. That doesn't mean I don't have 7zip installed, just saying it is only installed for the once in 6 months I find a file that uses it.

    You apparently didn't even read what he said. He clearly states he's using winrar and finds FX is much faster using 8 cores of FX in winrar. You're like, wrong on all fronts. He's using winrar (can't read?), he's using FX (why suggest it? Can't read?) AND there is a freaking check-box to turn on multi-threading in the app. Not sure whether you're shilling for AMD here or 7zip, but...jeez.
  • galta - Saturday, May 16, 2015 - link

    Last AMD CPU I had was the old and venerable 386DX@40Mhz. Were any of you alive back in the early 90s?
    Ever since I've been using Intel.
    Of course there were some brief moments during this time when AMD had the upper hand, but the last it happened was some 10 years ago when the Athlon and its two cores were a revolution and smashed Pentium Ds. It's just that during that particular moment I wasn't looking for an upgrade, so I've stuck with Intel ever since.
    Having said that, I have to add that I don't understand why we are spending so much time discussing compression of files.
    Of course the more cores you have the better, and AMD happens to have the least expensive 8 core processor on the market, BUT most users spend something like 0.15% of their time compressing files, making this particular shiny performance irrelevant for most of us.
    Because most other software does not scale so well in multithreading (and for games, it has nothing to do with DX12 as someone said elsewhere), we are most likely interested in performance per core, and Intel clearly has the lead here.
  • NeatOman - Wednesday, May 13, 2015 - link

    Truth is, the average user won't be able to tell the difference between a system with an i3 running on an SSD and an A6-7400K on an SSD, or even an A10-7850K, which would be more direct competition to the i3. I build about 2-4 new Intel and AMD systems a month and the only time I myself notice is when I'm setting them up; after that they all feel relatively close in speed due to the SSD, which was the largest bottleneck to have been overcome in the last 10 years.

    So Intel might feel snappier, but is still not much faster in day to day use of heavy browsing and media consumption as long as you have enough RAM and a decent SSD.
  • mapesdhs - Tuesday, May 12, 2015 - link

    Ian Cutress wrote:
    > Being a scaling benchmark, C-Ray prefers threads and seems more designed for Intel."

    It was never specifically designed for Intel. John told me it was, "...an extremely
    small program I did one day to figure out how would the simplest raytracer program
    look like in the least amount of code lines."

    The default simple scene doesn't make use of any main RAM at all (some systems
    could hold it entirely in L1 cache). The larger test is more useful, but it's still wise to
    bear in mind to what extent the test is applicable to general performance comparisons.
    John confirmed this, saying, "This thing only measures 'floating point CPU performance'
    and nothing more, and it's good that nothing else affects the results. A real rendering
    program/scene would be still CPU-limited meaning that by far the major part of the time
    spent would be CPU time in the fpu, but it would have more overhead for disk I/O, shader
    parsing, more strain for the memory bandwidth, and various other things. So it's a good
    approximation being a renderer itself, but it's definitely not representative."

    As a benchmark though, c-ray's scalability is incredibly useful, in theory only limited by
    the no. of lines in an image, so testing a system with dozens of CPUs is easy.

    Thanks for using the correct link btw! 8)

    Ian.

    PS. Ian, which c-ray test file/image are you using, and with what settings? ie. how many
    threads? Just wondered if it's one of the stated tests on my page, or one of those defined
    by Phoronix. The Phoronix page says they use 16 threads per core, 8x AA and 1600x1200
    output, but not which test file is used (scene or sphfract; probably the latter I expect, as
    'scene's incredibly simple).
  • Ian Cutress - Tuesday, May 12, 2015 - link

    It's the c-ray hard test on Linux-Bench, using

    cat sphfract | ./c-ray-mt -t $threads -s 3840x2160 -r 8 > foo.ppm

    I guess saying it preferred Intel is a little harsh. Many programs are just written the way people understand how to code, and it ends up sheer luck if they're better on one platform by default than the other, such as with 3DPM.
