AMD A8-7650K Conclusion

I've mentioned the story before, but last summer I built a system for my cousin-in-law out of spare parts. His old system, ancient and slow even by the standards of when it was made, was still being used for basic online browsing and school work. He had no budget, and I cobbled together an MSI motherboard, some DDR3, a mid-range Trinity APU (A8-5500), an AMD GPU and an SSD for him. He can now play CS:GO, DOTA2, Watch_Dogs and the like at semi-reasonable settings in dual graphics mode, as well as watch videos without the processor grinding to a halt. He even plays GTA V at normal settings at his native resolution of 1440x900. The total system budget, if purchased new, would have been around the $300 mark, or console territory. We reused the case and power supply, and he bought a new storage drive, but for his use case it was a night-and-day change. Building the equivalent system on an Intel backbone would have been a stretch, or it would have ended up trading away gaming performance (my cousin-in-law's priority) for features he didn't care about.

AMD will advertise that it doesn't just cater to this sort of upgrade, and that the APU line offers more than just a part for entry-level gamers. In the majority of our discrete gaming scenarios, this is also true. The APUs aren't necessarily ahead in terms of absolute performance, and in some situations they are behind, but with the right combination of hardware the APU route can offer equivalent performance at a lower price. This is ultimately why APUs were recommended in our last two big gaming CPU overviews, both for single-GPU gaming and for integrated gaming. In our new tests, it was really interesting to see where the lines are drawn with different CPU and GPU combinations, both integrated and discrete, from $70 to $560. One take-home result is our Grand Theft Auto V benchmark nearing 60 FPS at 720p on Low settings.

[Graphs: Grand Theft Auto V on Integrated Graphics; Grand Theft Auto V on Integrated Graphics (Under 60 FPS)]

I confess that I do not game as much as I used to. Before AnandTech I played a couple of games in clan tournaments, and through thick and thin I did well enough on public servers for Battlefield 2142 and BC2, but clan matches were almost always duds. However, with the right hardware or the right software, I get one AAA title a year and usually do the full single player with a bit of multiplayer. That game for 2015 is Grand Theft Auto V, which I was able to benchmark for this review. On its own, an APU can handle 720p at low settings with a reasonable frame rate, meaning that when the drivers are in place, an APU in dual graphics mode running at 60 FPS with decent quality shouldn't be too hard to achieve. For 2015 and 2016, that percentage-of-frames-over-60-FPS metric for GTA should be a holy grail for integrated graphics.

We've actually got a couple more APUs in for testing in the form of the A10-7700K and the A6-7400K, which are slightly older APUs but fill in the Kaveri data points we are missing. Stay tuned for that capsule review. Rumor also has it that there will be updates to the Kaveri line soon, although we haven't had any official details as of yet.

177 Comments

  • jabber - Tuesday, May 12, 2015 - link

    Exactly.

    "Yayyyy I use 7Zip all day long! "

    Said no one...ever.

    I don't even know why people still compress files. Are they still using floppies? Man, poor devils.
  • Gigaplex - Tuesday, May 12, 2015 - link

    I've been getting BSODs lately due to a bad Windows Update. The Microsoftie asked me to upload a complete memory crash dump. There's no way I can upload a 16GB dump file in a reasonable timeframe on a ~800kbps upload connection, especially when my machine BSODs every 24 hours. Compression brought that down to a much more manageable 4GB.
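    As a rough sanity check of those numbers, here is a minimal Python sketch of the upload-time arithmetic (illustrative only; it assumes a steady ~800 kbps uplink, decimal gigabytes, and no protocol overhead):

        # Illustrative only: estimate upload time at a fixed uplink rate.
        UPLINK_BPS = 800_000  # ~800 kilobits per second (assumed steady)

        def upload_hours(size_gb: float) -> float:
            bits = size_gb * 8e9          # decimal GB -> bits
            return bits / UPLINK_BPS / 3600

        for size_gb in (16, 4):
            print(f"{size_gb} GB dump: ~{upload_hours(size_gb):.0f} hours")
        # 16 GB -> ~44 hours (longer than the 24-hour BSOD window); 4 GB -> ~11 hours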
  • galta - Tuesday, May 12, 2015 - link

    So it makes perfect sense for you to stay with AMD...
  • NeatOman - Wednesday, May 13, 2015 - link

    I use it every day :( rocking an FX-8320 @ 4.5GHz for the last 3 years... I picked it up for $180 with the CPU and(!!) motherboard. I was about to pick up a 3770K too; I saved about $200 but am about 15-20% down on performance. And if you're worried about electricity cost, you're walking over dollars to pick up pennies.

    I do it to send pictures of work I do, and a good SSD is key :)
  • UtilityMax - Tuesday, May 12, 2015 - link

    If you look at the WinRAR benchmark, that result strongly suggests that WinRAR is multi-threaded. I mean, the two-core, two-thread Pentium is clearly slower than the two-core, four-thread Core i3, the quad-core i5 is clearly faster than the Core i3, and the Core i7 with its eight threads is clearly faster than the Core i5. Hence galta's comment that the AMD FX with 8 cores is probably even faster, though he says that this is not normal usage.
  • TheJian - Thursday, May 14, 2015 - link

    There has been an actual checkbox in WinRAR for multithreading for ages now. ROFL. 95% of usenet uses WinRAR, as does most of the web. That doesn't mean I don't have 7zip installed, just saying it is only installed for the once in 6 months I find a file that uses it.

    You apparently didn't even read what he said. He clearly states he's using winrar and finds FX is much faster using 8 cores of FX in winrar. You're like, wrong on all fronts. He's using winrar (can't read?), he's using FX (why suggest it? Can't read?) AND there is a freaking check-box to turn on multi-threading in the app. Not sure whether you're shilling for AMD here or 7zip, but...jeez.
  • galta - Saturday, May 16, 2015 - link

    The last AMD CPU I had was the old and venerable 386DX @ 40MHz. Were any of you alive back in the early 90s?
    Ever since I've been using Intel.
    Of course there were some brief moments during this time when AMD had the upper hand, but the last time it happened was some 10 years ago, when the Athlon and its two cores were a revolution and smashed the Pentium Ds. It's just that during that particular moment I wasn't looking for an upgrade, so I've been on Intel ever since.
    Having said that, I have to add that I don't understand why we are spending so much time discussing compression of files.
    Of course the more cores you have the better, and AMD happens to have the least expensive 8-core processor on the market, BUT most users spend something like 0.15% of their time compressing files, making this particular shiny performance figure irrelevant for most of us.
    Because most other software does not scale so well with multithreading (and for games, it has nothing to do with DX12 as someone said elsewhere), we are most likely interested in performance per core, and Intel clearly has the lead here.
  • NeatOman - Wednesday, May 13, 2015 - link

    Truth is, the average user won't be able to tell the difference between a system with an i3 running on an SSD and an A6-7400K on an SSD, or even an A10-7850K, which would be more direct competition for the i3. I build about 2-4 new Intel and AMD systems a month and the only time I myself notice is when I'm setting them up; after that they all feel relatively close in speed due to the SSD, which was the largest bottleneck to have been overcome in the last 10 years.

    So Intel might feel snappier, but it's still not much faster in day-to-day use of heavy browsing and media consumption, as long as you have enough RAM and a decent SSD.
  • mapesdhs - Tuesday, May 12, 2015 - link

    Ian Cutress wrote:
    > Being a scaling benchmark, C-Ray prefers threads and seems more designed for Intel.

    It was never specifically designed for Intel. John told me it was, "...an extremely
    small program I did one day to figure out how would the simplest raytracer program
    look like in the least amount of code lines."

    The default simple scene doesn't make use of any main RAM at all (some systems
    could hold it entirely in L1 cache). The larger test is more useful, but it's still wise to
    bear in mind to what extent the test is applicable to general performance comparisons.
    John confirmed this, saying, "This thing only measures 'floating point CPU performance'
    and nothing more, and it's good that nothing else affects the results. A real rendering
    program/scene would be still CPU-limited meaning that by far the major part of the time
    spent would be CPU time in the fpu, but it would have more overhead for disk I/O, shader
    parsing, more strain for the memory bandwidth, and various other things. So it's a good
    approximation being a renderer itself, but it's definitely not representative."

    As a benchmark though, c-ray's scalability is incredibly useful, in theory only limited by
    the no. of lines in an image, so testing a system with dozens of CPUs is easy.

    Thanks for using the correct link btw! 8)

    Ian.

    PS. Ian, which c-ray test file/image are you using, and with what settings? ie. how many
    threads? Just wondered if it's one of the stated tests on my page, or one of those defined
    by Phoronix. The Phoronix page says they use 16 threads per core, 8x AA and 1600x1200
    output, but not which test file is used (scene or sphfract; probably the latter I expect, as
    'scene's incredibly simple).
  • Ian Cutress - Tuesday, May 12, 2015 - link

    It's the c-ray hard test on Linux-Bench, using

    cat sphfract | ./c-ray-mt -t $threads -s 3840x2160 -r 8 > foo.ppm

    I guess saying it preferred Intel is a little harsh. Many programs are just written the way people understand how to code, and it ends up being sheer luck if they're better by default on one platform than the other, such as with 3DPM.
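    For anyone wanting to reproduce the scaling behaviour mapesdhs describes, here is a rough sketch of how that command could be scripted across thread counts (illustrative Python only; it assumes c-ray-mt and the sphfract scene sit in the working directory, and the thread counts are placeholders rather than the Linux-Bench configuration):

        import subprocess
        import time

        # Sweep thread counts over the c-ray command quoted above and time each run.
        THREAD_COUNTS = [1, 2, 4, 8, 16]  # placeholder values

        with open("sphfract", "rb") as f:
            scene = f.read()

        for threads in THREAD_COUNTS:
            start = time.time()
            with open("foo.ppm", "wb") as out:
                subprocess.run(
                    ["./c-ray-mt", "-t", str(threads), "-s", "3840x2160", "-r", "8"],
                    input=scene, stdout=out, check=True,
                )
            print(f"{threads:>2} threads: {time.time() - start:.2f} s")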
