Gaming Benchmarks: Low End

To satisfy our curiosity regarding low-power gaming, as well as dual graphics arrangements, we ran our regular suite through each processor. On this page are our integrated graphics results, along with a cheaper graphics solution in the R7 240 DDR3 and, in the case of AMD, both of these together in dual graphics mode.

Alien: Isolation

If first-person survival mixed with horror is your sort of thing, then Alien: Isolation, based on the Alien franchise, should be an interesting title. Developed by Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards, ranging from Game of the Year to several top 10s/25s and Best Horror titles, racking up over a million sales by February 2015. Alien: Isolation uses a custom-built engine that includes dynamic sound effects and should be fully multicore enabled.

For low-end graphics, we test at 720p with Ultra settings, whereas for mid- and high-range graphics, we bump this up to 1080p, using a scripted version of the built-in benchmark and taking the average frame rate as our metric.
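
As a rough sketch of how a run is reduced to a single number, the snippet below computes an average frame rate from a frame-time log (one frame time in milliseconds per line). The file name and format here are assumptions for illustration, not the exact tooling we use:

    # Minimal sketch: average frame rate from a hypothetical frame-time log.
    def average_fps(path):
        with open(path) as f:
            times_ms = [float(line) for line in f if line.strip()]
        # Average FPS is total frames over total seconds, not the mean of
        # per-frame FPS values (which would overweight the fast frames).
        return 1000.0 * len(times_ms) / sum(times_ms)

    print(f"Average: {average_fps('frametimes.txt'):.1f} FPS")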

Alien Isolation on Integrated Graphics

Alien Isolation on ASUS R7 240 DDR3 2GB ($70)

Alien Isolation on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics

Total War: Attila

The Total War franchise moves on to Attila, another Creative Assembly development. This stand-alone strategy title is set in 395 AD, and the main story line puts the gamer in control of the leader of the Huns in a bid to conquer parts of the world. Graphically, the game can render hundreds or thousands of units on-screen at once, each with its own individual actions, and can put even the big cards to work.

For low-end graphics, we test at 720p with performance settings, recording the average frame rate. With mid- and high-range graphics, we test at 1080p with the quality setting. In both circumstances, unlimited video memory is enabled, and the in-game scripted benchmark is used.
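
The two Attila configurations described above can be summarized as a small lookup (an illustrative sketch only, not a configuration file the game reads):

    # Illustrative summary of the Total War: Attila test matrix.
    ATTILA_TESTS = {
        "low-end":      {"resolution": "1280x720",  "preset": "performance"},
        "mid/high-end": {"resolution": "1920x1080", "preset": "quality"},
    }
    # Both configurations enable unlimited video memory and use the
    # in-game scripted benchmark.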

Total War: Attila on Integrated Graphics

Total War: Attila on ASUS R7 240 DDR3 2GB ($70)

Total War: Attila on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics

Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise finally hit the shelves on April 14, 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn't provide graphical presets, but it opens up the options to users, and Rockstar's Advanced Game Engine can push even the hardiest systems to their limit. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, cranking the settings up to maximum creates stunning visuals but hard work for both the CPU and the GPU.

For our test, we have scripted a version of the in-game benchmark, relying only on the final part, which combines a flight scene with an in-city drive-by, followed by a tanker explosion. For low-end systems, we test at 720p on the lowest settings, whereas mid- and high-end graphics play at 1080p with very high settings across the board. We record both the average frame rate and the percentage of frames under 60 fps (i.e., frame times over 16.6 ms).
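
The 60 fps threshold follows directly from the frame times: at 60 Hz, each frame has a budget of 1000/60 ≈ 16.7 ms, so any frame that takes longer counts against the metric. A small sketch, again assuming a plain list of frame times in milliseconds:

    # Sketch: percentage of frames that miss the 60 fps frame-time budget.
    def pct_under_60fps(times_ms, target_fps=60.0):
        budget_ms = 1000.0 / target_fps  # ~16.7 ms per frame at 60 Hz
        slow = sum(1 for t in times_ms if t > budget_ms)
        return 100.0 * slow / len(times_ms)

    print(f"{pct_under_60fps([12.0, 15.5, 18.2, 30.1]):.1f}% of frames under 60 fps")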

Grand Theft Auto V on Integrated Graphics

Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70)

Grand Theft Auto V on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics

GRID: Autosport

No graphics test is complete without some input from Codemasters and the Ego engine, which means for this round of testing, we point toward GRID: Autosport, the next iteration in the GRID racing franchise. As with previous racing titles we have tested, each update to the engine aims to add effects, reflections, detail and realism, with Codemasters making "authenticity" a main focal point for this version.

GRID’s benchmark mode is very flexible, and as a result, we created a test race using a shortened version of the Red Bull Ring with 12 cars doing two laps. The player car is in focus throughout this benchmark and starts last, but usually finishes second or third. For low-end graphics, we test at 1080p and medium settings, whereas mid- and high-end graphics get the full 1080p maximum settings. Both the average and the minimum frame rates are recorded.
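
Both numbers come from the same capture; the sketch below takes the minimum frame rate as the worst single frame (1000 divided by the largest frame time), which is one common convention (capture tools can also report a low percentile instead):

    # Sketch: average and minimum frame rates from one benchmark run.
    def summarize(times_ms):
        avg_fps = 1000.0 * len(times_ms) / sum(times_ms)
        min_fps = 1000.0 / max(times_ms)  # worst single frame
        return avg_fps, min_fps

    avg, low = summarize([14.2, 16.9, 13.8, 41.0, 15.1])
    print(f"Average: {avg:.1f} FPS, Minimum: {low:.1f} FPS")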

GRID: Autosport on Integrated Graphics

GRID: Autosport on ASUS R7 240 DDR3 2GB ($70)

GRID: Autosport on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics

For whatever reason, the A8-7670K puts in a good showing in the integrated tests, especially in dual graphics mode, where its score is abnormally high. Some other issue might be at play here, and it warrants further testing.

Middle-earth: Shadow of Mordor

The final title in our testing is another battle of system performance: the open-world action-adventure title Shadow of Mordor. Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity to a large extent, despite having been cut down from the original plans. The main story was written by the same writer as Red Dead Redemption, and it received Zero Punctuation's Game of the Year for 2014.

For testing purposes, SoM offers a dynamic screen resolution setting, allowing us to render at high resolutions that are then scaled down to the monitor. As a result, we get several tests using the in-game benchmark. For low-end graphics, we test at 720p with low settings, whereas mid- and high-end graphics get 1080p Ultra. The top graphics test is also run at 3840x2160 with Ultra settings, and we test two cards at 4K where possible.
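
The dynamic resolution setting is effectively supersampling: the GPU shades every pixel of the higher render target before the image is scaled down to the panel. The arithmetic makes the extra load obvious:

    # Sketch: pixel load of rendering at 4K and scaling down to a 1080p panel.
    render_px  = 3840 * 2160  # 8,294,400 pixels shaded per frame
    display_px = 1920 * 1080  # 2,073,600 pixels on the panel
    print(f"Supersampling factor: {render_px / display_px:.0f}x")  # -> 4x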

Shadow of Mordor on Integrated Graphics

Shadow of Mordor on ASUS R7 240 DDR3 2GB ($70)

Shadow of Mordor on ASUS R7 240 DDR3 2GB ($70) with Dual Graphics

Comments

  • Ian Cutress - Wednesday, November 18, 2015 - link

    It's a 95W desktop part. It's not geared for laptops or NUCs. There are 65W desktop parts with TDP Down modes to 45W, and below that there is the AM1 platform for socketed parts. Carrizo covers the 15W/35W soldered designs such as laptops and NUC-like devices.
  • Vesperan - Wednesday, November 18, 2015 - link

    Apologies if I missed it - but what speed was the memory running at for the APUs?

    The table near the start just said 'JEDEC' and linked to the G.Skill/Corsair websites. This is important given these things are bandwidth constrained - the difference between 1600MHz and 2133MHz can be significant (over 20 percent).
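
    Back-of-the-envelope (assuming dual-channel DDR3 with a 64-bit bus per channel), the theoretical peak bandwidth works out as:

        # Rough peak-bandwidth sketch for dual-channel DDR3 (64-bit channels).
        def peak_gb_s(mt_per_s, channels=2, bytes_per_channel=8):
            return mt_per_s * bytes_per_channel * channels / 1000.0

        for speed in (1600, 2133):
            print(f"DDR3-{speed}: {peak_gb_s(speed):.1f} GB/s")
        # DDR3-1600: 25.6 GB/s, DDR3-2133: 34.1 GB/s -> ~33% more headroom.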
  • tipoo - Wednesday, November 18, 2015 - link

    2133MHz, page 2
  • Ian Cutress - Wednesday, November 18, 2015 - link

    We typically run the CPUs at their maximum supported memory frequency (which is usually quoted as JEDEC specs with respect to sub-timings). So the table on the front page for AMD processors is relevant, and our previous reviews on Intel parts (usually DDR3-1600 C11 or DDR4-2133 C15) will state those.

    A number of people disagree with this approach ('but it runs at 2666!' or 'no-one runs JEDEC!'). For most enthusiasts, that may be true. But next time you're at a BYOC LAN, go see how many people are buying high speed memory but not implementing XMP. You may be surprised - people just putting parts together and assuming they just work.

    Also, consider that the CPU manufacturers would put the maximum supported frequency up if they felt that it should be validated at that speed. It's a question of silicon, yields, and DRAM markets. Companies like Kingston and Micron still sell masses of DDR3-1600. Other customers just care about the density of the memory, not the speed. It's an odd system, and by using max-at-JEDEC it keeps it fair between Intel, AMD or others: if a manufacturer wants a better result, they should release a part with a higher supported frequency.

    I don't think we've done a DRAM scaling review on Kaveri or Kaveri Refresh, which is perhaps an oversight on my part. Our initial samples had issues with high speed memory - maybe I should put this one from 1600 up to 2666 if it will do it.
  • Oxford Guy - Wednesday, November 18, 2015 - link

    Since you always overclock processors, it makes little sense to hold back an APU with slow RAM.
  • Oxford Guy - Wednesday, November 18, 2015 - link

    It's not just the bandwidth, either (like 2666) but the combination of that and latency. My FX runs faster in AIDA benches, for the most part, at CAS 9-11-10-1T 2133 (DDR3) than at 2400, probably due to limitations of the board (which is rated for 2000). Don't just focus on high clocks.
  • Ian Cutress - Thursday, November 19, 2015 - link

    Off the bat, that's a false equivalence - we only overclocked in this review to see how far it would go, not for the general benchmark set.

    But to reiterate a variation on what I've already said to you before:

    For DDR3, if I was to run AMD at 2666 and Intel at 1600, people would complain. If I was to run both at DDR3-2133, AMD users would complain because I'm comparing overclocked DRAM perf to stock perf.

    Most users/SIs don't overclock - that's the reality.

    If AMD or Intel wanted better performance, they'd rate the DRAM controller for higher and offer multiple SKUs.
    They do it with CPUs all the time through binning and what you can actually buy.
    e.g. the 6700K and 6600K - they don't sell one 6600K rated at 2133 and another 6600K rated at 2400.

    This is why we test out of the box for our main benchmark results.
    If they did do separate SKUs with different memory controller specifications, we would update the dataset accordingly with both sets, or the most popular/important set at any rate.

    Besides, anyone following CPU reviews at AT will know your opinion on the matter, you've made that abundantly clear in other reviews. We clearly disagree. But if you want to run the AIDA synthetics on your overclocked system, great - it totally translates into noticeable real-world performance gains for sure.
  • Vesperan - Thursday, November 19, 2015 - link

    Thanks Ian - I missed that when quickly going through the story this morning prior to work. Yet somehow I picked out the JEDEC bit!

    I like the approach you've outlined; it makes sense to me. So - for what it's worth, you have the support of at least one irrelevant person on the internet!

    From what I saw from a few websites (Phoronix springs to mind), the gains from memory scaling decline rapidly after 2133MHz.
  • CaedenV - Wednesday, November 18, 2015 - link

    I just don't understand the argument for buying AMD these days. Computers are not things you replace every 3-5 years anymore. In the post-Core2 world, systems last at least a good 7-10 years of usefulness, where simple upgrades of SSDs and GPUs can keep systems up to date and 'good enough' for all but the most pressing workloads. People need to stop sweating the up-front cost of a system, and start looking at what tier it performs at, and finding a way to get their budget to stretch to that level.

    I don't mean starting with a $500 build and stretching your wallet (or worse, your credit card) to purchase a $1200 system. I'm not some elitist rich guy; I understand the need to stick to a budget. But the difference between AMD and Intel in price is not very much, while the Intel chip is going to run cooler, quieter, and faster. Spending the extra $50 for the Intel chip and compatible motherboard is not going to break the bank.

    Because let's face it; pretty much everyone is going to fall into one of two camps.
    1) You are not going to game much at all, and the integrated Intel graphics, while not stellar, are going to be 'good enough' to run solitaire, phone game ports, 4K video, and a few other things. In this case the system price is going to be essentially the same, the video performance is going to be more than adequate, and the i3 is going to knock the socks off of the A8 5+ years down the road.
    2) You actually do play 'real' games on a regular basis, and the integrated A8 graphics are going to be a bonus to you for the first 2-6 months while you save up for a dGPU anyway... in which case the video performance is going to be nearly identical between the i3 and A8, while the i3 is going to be much more responsive in your day-to-day browsing, work, and media consumption. Or, you are going to find that you outgrow what an i3 or A8 can do, and you end up building a much faster i5 or i7 based system... in which case the i3 will either retain its resale value better, or will make a much better foundation for a home server, non-gaming HTPC, or some other use.

    I really want to love AMD, but after cost of ownership and longevity of the system are taken into consideration, they just do not make sense to purchase even in the budget category. The only place where AMD makes sense is if you absolutely have to have the GPU horsepower, but cannot have a dGPU in the system for some reason. And even in that case, the bump up to an A10 is going to be well worth the extra few $$. There is almost no use in getting anything slower than an A10 on the AMD side.

    But then again, AMD is working hard these days to reinvent themselves. Maybe 2 years from now this will all turn around and AMD will have more worthwhile products on the market that are useful for something.
