AMD A8-7650K Conclusion

I've mentioned the story before, but last summer I built a system for my cousin-in-law out of spare parts. His old system, ancient and slow even by the standards of when it was built, was still being used for basic web browsing and school work. He had no budget, and I cobbled together an MSI motherboard, some DDR3, a mid-range Trinity APU (A8-5500), an AMD GPU and an SSD for him. The upshot is that he can now play CS:GO, DOTA2, Watch_Dogs and the like at semi-reasonable settings in dual graphics mode, as well as watch videos without the processor grinding to a halt. He even plays GTA V at normal settings at his native resolution of 1440x900. The total system budget, if purchased new, would have been around the $300 mark, or console territory. We reused the case and power supply, and he bought a new storage drive, but for his use case it was a night and day change. Building the equivalent system on an Intel backbone would have been a stretch, or it would have ended up trading away gaming performance (my cousin-in-law's priority) for other features he didn't care about.

AMD will advertise that the APU line doesn't just cater to this kind of upgrade, and that it offers more than an entry point for budget gamers. In the majority of our discrete gaming scenarios, this is also true. The APUs aren't necessarily ahead in absolute performance, and in some situations they are behind, but with the right combination of hardware the APU route can offer equivalent performance at a lower price. This is ultimately why APUs were recommended in our last two big gaming CPU overviews, both for single-GPU gaming and for integrated gaming. In our new testing, it was really interesting to see where the lines are drawn with different CPU and GPU combinations, both integrated and discrete, from $70 to $560. One take-home result is our Grand Theft Auto benchmark nearing 60 FPS at 720p Low settings.

Grand Theft Auto V on Integrated Graphics

Grand Theft Auto V on Integrated Graphics [Under 60 FPS]

I confess that I do not game as much as I used to. Before AnandTech I played a couple of games in clan tournaments, and through thick and thin I did well enough on public servers for Battlefield 2142 and BC2, but clan matches were almost always duds. However, with the right hardware or the right software, I pick up one AAA title a year and usually do the full single player with a bit of multiplayer. That game for 2015 is Grand Theft Auto V, which I was able to benchmark for this review. On its own, an APU can handle 720p at low settings with a reasonable frame rate, meaning that when the drivers are in place, an APU in dual graphics mode running at 60 FPS with decent quality shouldn't be too hard to achieve. For 2015 and 2016, the percentage of frames over 60 FPS in GTA should be a holy grail metric for integrated graphics.
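As an aside, the "percentage of frames over 60 FPS" metric is straightforward to compute from per-frame render times. Here is a minimal Python sketch; the frame times are hypothetical illustrative data, not measurements from this review:

```python
# Compute the "percentage of frames over 60 FPS" metric from a list of
# per-frame render times in milliseconds (hypothetical example data).
frame_times_ms = [14.2, 15.8, 16.9, 17.5, 13.1, 21.0, 16.0, 15.5]

# A frame rendered in under 1000/60 ms (about 16.67 ms) is "over 60 FPS".
threshold_ms = 1000 / 60
over_60 = sum(1 for t in frame_times_ms if t < threshold_ms)
pct = 100 * over_60 / len(frame_times_ms)
print(f"{pct:.1f}% of frames over 60 FPS")  # → 62.5% of frames over 60 FPS
```

Reporting the metric this way penalizes stutter that an average FPS figure hides: a run averaging 60 FPS can still spend a third of its frames below the threshold.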

We've actually got a couple more APUs in for testing, the A10-7700K and the A6-7400K; these are slightly older parts, but they fill in the Kaveri data points we are missing. Stay tuned for that capsule review. Rumor also has it that there will be updates to the Kaveri line soon, although we haven't had any official details as of yet.


177 Comments


  • Gigaplex - Tuesday, May 12, 2015 - link

    What happened to the DX12 benchmarks? Do we need to remind you that DX12 hasn't even been released yet, and is therefore completely unsuitable for comparing hardware?
  • akamateau - Tuesday, May 12, 2015 - link

    Porting a CURRENT game designed and CODED to DX11 MAX SPEC to DX12 does not mean that it will automatically look better or play better, unless you consider faster fps the main criterion for quality gameplay. In fact DX11 game benchmarks will not show ANY increase in performance using Mantle or DX12.
    And logically, continuing to write to this DX11 MAXSPEC will NOT improve gaming community-wide in general. Let’s be clear, a higher spec game will cost more money. So the studio must balance cost and projected sales. So I would expect that incremental increases in game quality may occur over the next few years as studios become more confident with spending more of the gaming budget on a higher MINSPEC DX12 game. Hey, it is ALL ABOUT THE MONEY.
    If a game was written with the limitations or, better, say the maximums or MAXSPEC of DX11 then that game will in all likelihood not look any better with DX12. You will run it at faster frame rates but if the polygons, texture details and AI objects aren't there then the game will only be as detailed as the original programming intent will allow.
    However, what DX12 will give you is a game that is highly playable with much less expensive hardware.
    For instance, the 3dMark API Overhead test reveals that under DX11 an Intel i7-4960 with a GTX 980 can produce 2,000,000 draw calls at 30 fps. Switch to DX12 and a single $100 AMD A6-7400 APU can produce 4,400,000 draw calls at 30 fps. Of course these aren't rendered, but you can't render an object if it hasn't been drawn.
    If you are happy with the level of performance that $1500 will get you with DX11 then you should be ecstatic to get very close to the same level of play that DX12 and a $100 A6 AMD APU will get you!!!!
    That was the whole point behind Mantle, er (cough, cough) DX12. Gaming is opened up to more folks without massive amounts of surplus CASH.
  • silverblue - Tuesday, May 12, 2015 - link

    Yes, yes, I see your point about AMD's iGPUs benefitting a lot from DirectX 12/Mantle, however I don't think you needed so many posts to make it. Additionally, not benchmarking a specific way doesn't make somebody a liar, it just means they didn't benchmark a specific way.

    Draw calls don't necessarily mean better performance, especially if you're memory or ROP limited to begin with... what's more, the performance difference between the 384-shader 7600 and the 512-shader 7850K is practically nothing. Based on this, why would I opt for the 7850K when the 7600 performs similarly for less power? The 7400K is only a little behind but is significantly slower in DX11 testing. Does that mean we don't need the 7600 either if we're playing DX12 titles? Has the test highlighted a significant memory bottleneck with the whole Kaveri product stack that DX12 simply cannot solve?

    In addition, consider the dGPU results. Intel still smokes AMD on a per-FPU basis. By your own logic, AMD will not gain any ground on Intel at all in this area if we judge performance purely on draw calls.

    DirectX 11 is still current. There aren't many Mantle games out there to provide much for this comparison, but I'm sure somebody will have those results on another site for you to make further comparisons.
  • akamateau - Tuesday, May 12, 2015 - link

    There is ONLY ONE BENCHMARK that is relevant to gamers.

    3dMark API Overhead Test!

    If I am considering a GPU purchase I am not buying it because I want to Calculate Pi to a BILLION decimal places. I want better gameplay.

    When I am trying to decide on an AMD APU or Intel IGP then that decision is NOT based on CineBench but rather on what silicon produces QUALITY GAMEPLAY.

    You are DELIBERATELY IGNORING DX12 API Overhead Tests and that makes you a liar.

    The 3dMark API Overhead Test measures the draw calls that are produced when the FPS drops below 30. As the following numbers will show the AMD APU will give the BEST GAMING VISUAL EXPERIENCE.

    So what happens when this benchmark is run on AMD APU’s and Intel IGP?
    AMD A10-7700k
    DX11 = 655,000 draw calls.
    Mantle = 4,509,000 Draw calls.
    DX12 = 4,470,000 draw calls.

    AMD A10-7850K
    DX11 = 655,000 draw calls
    Mantle = 4,700,000 draw calls
    DX12 = 4,454,000 draw calls.

    AMD A8-7600
    DX11 = 629,000 draw calls
    Mantle = 4,448,000 draw calls.
    DX12 = 4,443,000 draw calls.

    AMD A6-7400k
    DX11 = 513,000 draw calls
    Mantle = 4,047,000 draw calls
    DX12 = 4,104,000 draw calls

    Intel Core i7-4790
    DX11 = 696,000 draw calls.
    DX12 = 2,033,000 draw calls

    Intel Core i5-4690
    DX11 = 671,000 draw calls
    DX12 = 1,977,000 draw calls.

    Intel Core i3-4360
    DX11 = 640,000 draw calls.
    DX12 = 1,874,000 draw calls

    Intel Core i3-4130T
    DX11 = 526,000 draw calls.
    DX12 = 1,692,000 draw calls.

    Intel Pentium G3258
    DX11 = 515,000 draw calls.
    DX12 = 1,415,000 draw calls.

    These numbers were gathered from AnandTech piece written on March 27, 2015.
    Intel IGP is hopelessly outclassed by AMD APUs using DX12. AMD outperforms Intel by 100%!!!
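[Ed: the "100%" figure above is easy to verify from the numbers quoted in the comment itself. A quick Python sketch of the arithmetic, using only the comment's draw-call figures (not independently re-measured), shows the DX12 gap is actually closer to 120%; whether draw-call throughput translates to in-game performance is a separate question, as other commenters note.]

```python
# Check the AMD-vs-Intel DX12 draw-call claim using the 3DMark API
# Overhead figures quoted in the comment above (draw calls at 30 FPS).
dx12_calls = {
    "AMD A10-7850K": 4_454_000,
    "AMD A8-7600": 4_443_000,
    "Intel Core i7-4790": 2_033_000,
    "Intel Core i5-4690": 1_977_000,
}

amd_best = dx12_calls["AMD A10-7850K"]
intel_best = dx12_calls["Intel Core i7-4790"]
advantage = (amd_best / intel_best - 1) * 100
print(f"A10-7850K vs i7-4790 under DX12: {advantage:.0f}% more draw calls")
# → A10-7850K vs i7-4790 under DX12: 119% more draw calls
```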
  • JumpingJack - Wednesday, May 13, 2015 - link

    "There is ONLY ONE BENCHMARK that is relevant to gamers.

    3dMark API Overhead Test!"

    NO, that is a synthetic; it simply states how many draw calls can be made. It does not measure the capability of the entire game engine.

    There is only ONE benchmark of concern to gamers -- actual performance of the games they play. Period.

    Get ready for a major AMD DX12 let down if this is your expectation.
  • akamateau - Tuesday, May 12, 2015 - link

    Legacy Benchmarks?????? I am going to spend money based on OBSOLETE BENCHMARKS???

    CineBench 11.5 was released in 2010 and is obsolete. It is JUNK
    TrueCrypt???? TrueCrypt development ended in MAY 2014. Another piece of JUNK.

    Where is 3dMark API Overhead Test? That is brand new.

    Where Is STARSWARM?? That is brand new.
  • akamateau - Tuesday, May 12, 2015 - link

    Where are your DX12 BENCHMARKS?
  • rocky12345 - Tuesday, May 12, 2015 - link

    Whining about no DX12 tests? Just take the info that was given, learn from that, and wait for a released DX12 program that can truly be tested. Testing DX12 at this point has very little to offer because it is still a beta product and the code is far from finished, and by the time it is done all the tests you are screaming to have done will not be worth a pinch of raccoon crap.
  • galta - Tuesday, May 12, 2015 - link

    Back when DX11 was about to be released, AMD fans said the same: nVidia is better @DX10, but with DX11, Radeon's superior I-don't-know-what will rule.
    Time passed and nVidia smashed Radeon's new - and rebranded - GPUs.
    I suspect it will be the same this time.
