Gaming Benchmarks: Mid-Range

Alien: Isolation

If first-person survival mixed with horror is your sort of thing, then Alien: Isolation, based on the Alien franchise, should be an interesting title. Developed by The Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards, from Game of the Year to several top 10/25 placings and Best Horror titles, racking up over a million sales by February 2015. Alien: Isolation uses a custom-built engine which includes dynamic sound effects and should be fully multi-core enabled.

For low end graphics, we test at 720p with Ultra settings, whereas for mid and high range graphics we bump this up to 1080p, taking the average frame rate as our marker with a scripted version of the built-in benchmark.

Alien Isolation on MSI R9 285 Gaming 2GB ($240)

Alien Isolation on MSI GTX 770 Lightning 2GB ($245)

For mid-range cards, Alien: Isolation shows a clean split between Intel and AMD, but the difference is a few FPS at best. It would seem that core count matters little here.

Total War: Attila

The Total War franchise moves on to Attila, another Creative Assembly development. It is a stand-alone strategy title set in 395 AD, where the main storyline lets the gamer take control of the leader of the Huns in order to conquer parts of the world. Graphically, the game can render hundreds or thousands of units on screen at once, each with their own individual actions, and can put even the big cards to task.

For low end graphics, we test at 720p with performance settings, recording the average frame rate. With mid and high range graphics, we test at 1080p with the quality setting. In both circumstances, unlimited video memory is enabled and the in-game scripted benchmark is used.

Total War: Attila on MSI R9 285 Gaming 2GB ($240)

Total War: Attila on MSI GTX 770 Lightning 2GB ($245)

Neither combination here pulls Attila up to a reasonable gaming frame rate, although the differences between CPUs are bigger with the R9 285.

Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise finally hit the shelves on April 14th, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA V doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardiest systems to the limit using Rockstar’s Advanced Game Engine. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum the game creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark, relying only on the final part, which combines a flight scene with an in-city drive-by followed by a tanker explosion. For low end systems we test at 720p on the lowest settings, whereas mid and high end graphics play at 1080p with very high settings across the board. We record both the average frame rate and the percentage of frames rendered under 60 FPS (i.e., frame times above 16.6 ms).
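To illustrate how those two numbers relate, here is a minimal sketch (our own, not the actual benchmark tooling) that derives the average frame rate and the share of frames under 60 FPS from a list of per-frame render times; the frame-time values are hypothetical:

```python
# Minimal sketch: compute average FPS and the percentage of frames under 60 FPS
# (i.e. frame times longer than 16.6 ms) from a per-frame time trace.
# The sample values are made up for illustration, not real benchmark output.

def summarize_frame_times(frame_times_ms):
    total_ms = sum(frame_times_ms)
    avg_fps = 1000.0 * len(frame_times_ms) / total_ms            # average over the whole run
    slow_frames = sum(1 for t in frame_times_ms if t > 1000.0 / 60)
    pct_under_60 = 100.0 * slow_frames / len(frame_times_ms)     # share of frames below 60 FPS
    return avg_fps, pct_under_60

sample = [12.1, 15.8, 22.4, 14.9, 33.0, 16.2, 13.7]  # hypothetical frame times in ms
avg, pct = summarize_frame_times(sample)
print(f"Average: {avg:.1f} FPS, {pct:.1f}% of frames under 60 FPS")
```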

Grand Theft Auto V on MSI R9 285 Gaming 2GB ($240)

Grand Theft Auto V on MSI R9 285 Gaming 2GB ($240) [Under 60 FPS]

Grand Theft Auto V on MSI GTX 770 Lightning 2GB ($245)

Grand Theft Auto V on MSI GTX 770 Lightning 2GB ($245) [Under 60 FPS]

In GTA, the G3258, the i3-4130T and the A8-7650K perform similarly, within a few frames of each other, though in both cases the $200+ CPUs give the peak performance, up to 20% more than the $100 set.

GRID: Autosport

No graphics test suite is complete without some input from Codemasters and the EGO engine, which means for this round of testing we point towards GRID: Autosport, the next iteration in the GRID series of racing titles. As with our previous racing tests, each update to the engine aims to add effects, reflections, detail and realism, with Codemasters making ‘authenticity’ a main focal point for this version.

GRID’s benchmark mode is very flexible, and as a result we created a test race using a shortened version of the Red Bull Ring with twelve cars doing two laps. The car in focus starts last and is quite fast, but usually finishes second or third. For low end graphics we test at 1080p medium settings, whereas mid and high end graphics get the full 1080p maximum. Both the average and minimum frame rates are recorded.
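For reference, a minimum frame rate can be read straight off the slowest frame in a trace; the short sketch below shows that relationship (our own illustration with made-up values, since the in-game benchmark reports its own average and minimum figures):

```python
# Minimal sketch: average FPS plus the minimum FPS implied by the single
# slowest frame in a trace of per-frame times (ms). Values are illustrative.

def avg_and_min_fps(frame_times_ms):
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    min_fps = 1000.0 / max(frame_times_ms)  # the slowest frame sets the minimum
    return avg_fps, min_fps

print(avg_and_min_fps([10.2, 11.5, 25.0, 12.3, 13.1]))  # -> (approx. 69.3, 40.0)
```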

GRID: Autosport on MSI R9 285 Gaming 2GB ($240)

GRID: Autosport on MSI R9 285 Gaming 2GB ($240) [Minimum FPS]

GRID: Autosport on MSI GTX 770 Lightning 2GB ($245)

GRID: Autosport on MSI GTX 770 Lightning 2GB ($245) [Minimum FPS]

On an R9 285, GRID seems to favor Intel and more cores, as shown by the jump from the i3 to the i5. On a GTX 770, by contrast, both teams perform similarly, well north of 60 FPS, with the differences lying more in the minimum frame rates.

Middle-Earth: Shadows of Mordor

The final title in our testing is another test of system performance, this time with the open-world action-adventure title Shadows of Mordor. Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity to a large extent, despite having been cut down from the original plans. The main story was written by the same writer as Red Dead Redemption, and the game received Zero Punctuation’s Game of the Year in 2014.

For testing purposes, SoM offers a dynamic screen resolution setting, allowing us to render at higher resolutions that are then scaled down to the monitor. As a result, we run several tests using the in-game benchmark. For low end graphics we test at 720p with low settings, whereas mid and high end graphics get 1080p Ultra. The top graphics test is also run at 3840x2160 with Ultra settings, and we test two cards at 4K where possible.
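As a back-of-the-envelope illustration of what that scaling means (our own sketch of the general idea, not SoM's internal implementation), rendering at 3840x2160 for a 1080p panel means each displayed pixel is downsampled from four rendered pixels:

```python
# Minimal sketch of the supersampling idea behind SoM's dynamic resolution
# setting: render internally above native resolution, then scale down to the
# monitor. Purely illustrative.

def rendered_pixels_per_display_pixel(render_res, native_res):
    rw, rh = render_res
    nw, nh = native_res
    return (rw * rh) / (nw * nh)

print(rendered_pixels_per_display_pixel((3840, 2160), (1920, 1080)))  # 4.0
```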

Shadows of Mordor on MSI R9 285 Gaming 2GB ($240)

Shadows of Mordor on MSI R9 285 Gaming 2GB ($240) [Minimum FPS]

Shadows of Mordor on MSI GTX 770 Lightning 2GB ($245)

Shadows of Mordor on MSI GTX 770 Lightning 2GB ($245) [Minimum FPS]

Interestingly, the APUs do well in SoM at 1080p, especially in average frame rates. Unfortunately this does not translate into the minimum frame rates.

Middle-Earth: Shadows of Mordor at 4K

Shadows of Mordor on MSI R9 285 Gaming 2GB ($240)

Shadows of Mordor on MSI R9 285 Gaming 2GB ($240) [Minimum FPS]

Shadows of Mordor on MSI GTX 770 Lightning 2GB ($245)

Shadows of Mordor on MSI GTX 770 Lightning 2GB ($245) [Minimum FPS]

At 4K, the older 6000 series APUs seem to be a little behind. With the R9 285 there is a staggered effect in average FPS performance, although a clearer separation shows up in the minimum frame rates.

Comments

  • Gigaplex - Tuesday, May 12, 2015 - link

    Mantle for AMD discrete GPUs runs on Intel CPUs so is a completely valid test for CPU gaming performance.
  • CPUGPUGURU - Tuesday, May 12, 2015 - link

    Mantle was developed as an AMD GCN API, so don't go telling us it's optimized for Intel or Nvidia, because it's NOT! Mantle is DOA, dead and buried, stop pumping a Zombie API.
  • silverblue - Wednesday, May 13, 2015 - link

    You've misread Gigaplex's comment, which was stating that you can run an AMD dGPU on any CPU and still use Mantle. It wasn't about using Mantle on Intel iGPUs or NVIDIA dGPUs, because we know that functionality was never enabled.

    Mantle isn't "dead and buried"; sure, it may not appear in many more games, but considering it's at the very core of Vulkan... though that could be just splitting hairs.
  • TheJian - Friday, May 15, 2015 - link

    Incorrect. The core of Mantle sales pitches was HLSL. You only think Mantle is Vulkan because you read Mantle/Vulkan articles on Anandtech...LOL. Read PCPER's take on it, and understand how VASTLY different Vulkan (Headed by Nvidia's Neil Trevett, who also came up with OpenGL ES BTW) is from Mantle. At best AMD ends up equal here, and at worst Nvidia has an inside track always with the president of Khronos being the head of Nvidia's mobile team too. That's pretty much like Bapco being written by Intel software engineers and living on Intel Land across the street from Intel itself...ROFL. See Van Smith's articles on Bapco/sysmark etc and why tomshardware SHAMEFULLY dismissed him and removed his name from his articles ages ago.

    Anandtech seems to follow this same path of favoritism for AMD these days since 660ti article - having AMD portal etc no Nvidia portal - mantle lovefest articles etc, same reason I left toms years ago circa 2001 or so. It's not the same team at tomshardware now, but the damage done then is still in many minds today (and shown at times in forum posts etc). Anandtech would be wise to change course, but Anand isn't running things now, and doesn't even own them today. I'd guess stock investors in the company that bought anandtech probably hold massive shares in sinking AMD ;) But that's just a guess.

    http://www.pcper.com/reviews/General-Tech/GDC-15-W...
    Real scoop on Vulkan. A few bits of code don't make Vulkan Mantle...LOL. If it was based on HLSL completely you might be able to have a valid argument but that is far from the case here. It MIGHT be splitting hairs if this was IN, but it's NOT.

    http://www.pcper.com/category/tags/glnext
    The articles on glNext:
    "Vulkan is obviously different than Mantle in significant ways now, such as its use of SPIR-V for its shading language (rather than HLSL)."
    CORE? LOL. Core of Vulkan would be HLSL and not all the major changes due to the GROUP effort now.

    Trevett:
    "Being able to start with the Mantle design definitely helped us get rolling quickly – but there has been a lot of design iteration, not the least making sure that Vulkan can run across many different GPU architectures. Vulkan is definitely a working group design now."

    Everything that was AMD specific is basically gone, as is the case with DX12 (mantle ideas, but not direct usage). Hence NV showing victories in AMD's own mantle showcase now (starswarm)...ROFL. How bad is that? Worse, NV was chosen for the DX12 Forza demo, which is an AMD console game. Why didn't MS choose AMD?

    They should have spent the time they wasted on Mantle making DX12/Vulkan driver advances, not to mention DX11 driver improvements which affect everything on the market now and probably for a while into the future (until win10 takes over at least if ever if vulkan is on billions of everything else first), rather than a few mantle games. Nvidia addressed the entire market with their R&D while AMD wasted it on Mantle, consoles & apu. The downfall of AMD started with a really bad ATI price and has been killing them since then.
  • TheJian - Friday, May 15, 2015 - link

    Mantle is almost useless for FAST cpus and is dead now (wasted R&D). It was meant to help AMD weak cpus which only needed to happen because they let guys like Dirk Meyer (who in 2011 said it was a mistake to spend on anything but CORE cpu/gpu, NOT APU), & Keller go ages ago. Adding Papermaster might make up for missing Meyer though. IF they would NOT have made these mistakes, we wouldn't even have needed Mantle because they'd still be in the cpu race with much higher IPC as we see with ZEN. You have no pricing power in APU as it feeds poor people and is being crushed by ARM coming up and Intel going down to stop them. GAMERS (and power users) will PAY a premium for stuff like Intel and Nvidia & AMD ignored engineers who tried to explain this to management. It is sad they're now hiring them back to create again what they never should have left to begin with. The last time they made money for the year was Athlon's and high IPC. Going into consoles instead of spending on CORE products was a mistake too. Which is why Nvidia said they ignored it. We see they were 100% correct as consoles have made amd nothing and lost the CPU & GPU race while dropping R&D on both screwing the future too. The years spent on this crap caused AMD's current problems for 3yrs on cpu/gpu having zero pricing power, selling off fabs, land, laying off 1/3 of employees etc. You can't make a profit on low margin junk without having massive share. Now if AMD had negotiated 20%+ margins from the get-go on consoles, maybe they'd have made money over the long haul. But as it stands now they may not even recover R&D and time wasted as mobile kills consoles at 1/2 through their life with die shrinks+revving yearly, far cheaper games and massive numbers sold yearly that is drawing devs away from consoles.

    Even now with 300's coming (and only top few cards are NOT rebadges which will just confuse users and piss them off probably), Nvidia just releases a faster rehash of tech waiting to answer and again keep a great product down in pricing. AMD will make nothing from 300's. IF they had ignored consoles/apus they would have ZEN out already (2yrs ago? maybe 3?) and 300's would have been made on 28nm optimized possibly like maxwell squeezed out more perf on the same process 6 months ago. Instead NV has had nearly a year to just pile up profits on an old process and have an answer waiting in the wings (980ti) to make sure AMD's new gpu has no pricing power.

    Going HBM when it isn't bandwidth starved is another snafu that will keep costs higher, especially with low yields on that and the new process. But again because of lack of R&D (after blowing it on consoles/apu), they needed HBM to help drop the wattage instead of having a great 28nm low watt alternative like maxwell that can still milk a very cheap old DDR5 product which has more than enough bandwidth as speeds keep increasing. HBM is needed at some point, just not today for a company needing profits that has no cash to burn on low yields etc. They keep making mistakes and then having to make bad decisions to make up for them that stifle much needed profits. They also need to follow Nvidia in splitting fp32 from fp64 as that will further cement NV gpus if they don't. When you are a professional at both things instead of a jack of all trades loser in both, you win in perf and can price accordingly while keeping die size appropriate for both.

    Intel hopefully will be forced back to this due to ZEN also on the cpu side. Zen will cause Intel to have to respond because they won't be able to shrink their way to keeping the gpu (not with fabs catching Intel fabs) and beat AMD with a die fully dedicated to CPU and IPC. Thank god too, I've been saying AMD needed to do this for ages and without doing it would never put out another athlon that would win for 2-3yrs. I'm not even sure Zen can do this but at least it's a step in the right direction for profits. Fortunately for AMD an opening has been created by Intel massively chasing ARM and ignoring cpu enthusiasts and desktop pros. We have been getting crap on cpu side since AMD exited, while Intel just piled on gpu side which again hurt any shot of AMD making profits here...LOL. They don't seem to understand they make moves that screw themselves longer term. Short term thinking kills you.
  • ToTTenTranz - Wednesday, May 13, 2015 - link

    Yes, and the APU being reviewed, the A8-7650K also happens to be "AMD ONLY", so why not test mantle? There's a reasonable number of high-profile games that support it:

    - Battlefield 4 and Hardline
    - Dragon Age: Inquisition
    - Civilization: Beyond Earth
    - Sniper Elite III

    Plus another bunch coming up, like Star Wars Battlefront and Mirror's Edge.

    So why would it hurt so much to show at least one of these games running Mantle with a low-specced CPU like this?

    What is anandtech so afraid to show, by refusing to test Mantle comparisons with anything other than >$400 CPUs?
  • V900 - Thursday, May 14, 2015 - link

    There isn't anything to be scared of, but Mantle is only available in a handful of games, and beyond those it's dead and buried.

    Anandtech doesn't run Mantle benchmarks for the same reason they don't review AGP graphics cards: It's a dead technology aside from the few people who currently use it...
  • chizow - Tuesday, May 12, 2015 - link

    I seriously considered an A10-7850K Kaveri build last year around this time for a small power-efficient HTPC to stream DVR'd shows from my NAS, but in the end a number of issues steered me away:

    1) Need for chassis, PSU, cooler.
    2) Lack of good mini-ITX options at launch.
    3) Not good enough graphics for gaming (not a primary consideration anyways, but something fast enough might've changed my usage patterns and expectations).

    Sadly, this was the closest I've gotten to buying an AMD CPU product in a long, long time but ultimately I went with an Intel NUC that was cheaper to build, smaller form factor, and much less power usage. And all I gave up was GPU performance that wasn't realistically good enough to change my usage patterns or expectations anyways.

    This is the problem AMD's APUs face in the marketplace today though. That's why I think AMD made a big mistake in betting their future on Fusion; people just aren't willing to trade fast, efficient, or top-of-the-line CPUs for a mediocre CPU/GPU combo.

    Today, there are even bigger challenges out there for AMD. You have Alienware offering the Alpha with an i3 and a GTX 860M that absolutely destroys these APUs in every metric for $500, $400 on sale, and it takes care of everything from chassis, PSU and cooling to the Windows licensing. That's what AMD is facing now in the low-end PC market, and I just can't see them competing with that kind of performance and value.
  • silverblue - Tuesday, May 12, 2015 - link

    I would have opted for the A8-7600 instead of the 7850K, though I do admit it was very difficult to source back then. 65W mode doesn't perform much faster than 45W mode. I suppose it's all about what you want from a machine in the end, and AMD don't make a faster CPU with weaker iGPU which might make more sense.

    The one thing stopping AMD from releasing a far superior product, in my eyes, was the requirement to at least try to extract as much performance from a flawed architecture so they could say it wasn't a complete waste of time.
  • galta - Tuesday, May 12, 2015 - link

    +1
    Fusion was not only poor strategy, it was poor implementation.
    Leaving aside the discussion of the merits of integrated GPUs, if AMD had done it right we would have seen Apple adopting their processors in the MacBook series, given their obsession with slim hardware with no discrete graphics.
    Have we seen that? No.
    You see, even though Intel has never said that integrated GPUs were the future, the single most important customer in that market segment was claimed by them.
