373 Comments

  • geniekid - Tuesday, January 14, 2014 - link

    Would've been nice to see a discrete GPU thrown in the mix - especially with all that talk about Dual Graphics. Reply
  • Ryan Smith - Tuesday, January 14, 2014 - link

    Dual graphics is not yet up and running (and it would require a different card than the 6750 Ian had on hand). Reply
  • Nenad - Wednesday, January 15, 2014 - link

    I wonder if Dual Graphics can work with HSA, though I doubt it, due to cache coherence if nothing else.

    While on HSA, I must say that it looks very promising. I do not have experience with AMD-specific GPU programming or with OpenCL, but I do with CUDA (and some AMP), and the ability to avoid CPU/GPU copies would be a great advantage in certain cases.

    The interesting thing is that AMD now has hardware that supports HSA but does not yet have the software tools (drivers, compilers...), while NVidia does not have the hardware but does have the software: in the new CUDA you can use unified memory, even if the driver simulates the copy for you (which supposedly means that when NVidia delivers the hardware, your unaltered app from last year will work and take advantage of HSA).

    Also, while HSA is a great step ahead, I wonder if we will ever see one much more important thing if GPGPU is ever to become mainstream: PREEMPTIVE MULTITASKING. As it is now, the programmer/app still needs to spend time figuring out how to split work into small chunks for the GPU, so as not to take up too much GPU time at once. It increases the complexity of GPU code and relies on the good behavior of other GPU apps. Hopefully, AMD's next 'unification' after HSA will be 'preemptive multitasking' ;p
    Reply
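The copy overhead Nenad is talking about is easy to put a rough number on. A back-of-the-envelope sketch (the ~16 GB/s figure for PCIe 3.0 x16 is a nominal assumption, not something from the article):

```python
# Rough model: time spent copying a buffer host->device and back over PCIe,
# the cost a zero-copy (HSA / unified-memory) path would avoid.
def copy_overhead_ms(buffer_gb, pcie_gb_per_s=16.0):
    """Round-trip transfer time in milliseconds for one buffer."""
    one_way_s = buffer_gb / pcie_gb_per_s
    return 2 * one_way_s * 1000.0

# A 1 GB working set costs ~125 ms per round trip at 16 GB/s --
# easily longer than a short kernel's own execution time.
print(round(copy_overhead_ms(1.0)))  # -> 125
```

For kernels that run only a few milliseconds, the transfers, not the compute, dominate — which is exactly why avoiding the copy matters.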
  • tcube - Thursday, January 16, 2014 - link

    Preemption (dynamic context switching) is said to come with the Excavator core / Carrizo APU. And they do have the toolset for HSA/HSAIL; just look it up on AMD's site. Bolt, I think it's called; it's a C++ library.

    Furthermore, Project Sumatra will make Java execute on the GPU: at first via an OpenCL wrapper, then via HSA, and in the end the JVM itself will do it for you via HSA. Oracle is pretty committed to this.
    Reply
  • kazriko - Thursday, January 30, 2014 - link

    I think where multi-GPU and Dual Graphics stuff will really shine is when we start getting more Mantle applications. With that, each GPU in the system can be controlled independently: developers could put GPGPU processes that benefit from low latency to the CPU on the APU's built-in GPU, and graphics rendering work that doesn't need latency that low on the discrete graphics card.

    Preemptive would be interesting, but I'm not sure how game-changing it would be once you get into HSA's juggling of tasks back and forth between different processors. Right now, they do have multitasking they could do by having several queues going into the GPU, and you could have several tasks running from each queue across the different CUs on the chip. Not preemptive, but definitely multi-threaded.
    Reply
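The "multi-queue but not preemptive" behaviour kazriko describes can be sketched with a toy cooperative scheduler (pure Python; the queue names and task costs are invented for illustration and have nothing to do with any real driver):

```python
from collections import deque

def run_cooperative(queues):
    """Round-robin across queues, but each task runs to completion:
    a long task delays everything scheduled after it (no preemption)."""
    timeline, t = [], 0
    qs = deque(queues)
    while qs:
        q = qs.popleft()
        name, cost = q.popleft()
        t += cost                   # the task holds the GPU for its full cost
        timeline.append((name, t))  # record its finish time
        if q:
            qs.append(q)            # queue still has work; rotate it back in
    return timeline

# Two queues; the 10-unit task in queue A pushes back everything in queue B.
a = deque([("a1", 10), ("a2", 1)])
b = deque([("b1", 1), ("b2", 1)])
print(run_cooperative([a, b]))  # -> [('a1', 10), ('b1', 11), ('a2', 12), ('b2', 13)]
```

A preemptive scheduler could time-slice the 10-unit task so that b1 finishes around t=2 instead of t=11 — that is the latency win preemption would buy.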
  • MaRao - Thursday, January 16, 2014 - link

    Instead, AMD should create new chipsets with dual APU sockets. Two A8-7600 APUs could give tremendous CPU and GPU performance while maintaining 90-100W power usage. Reply
  • PatHeist - Thursday, February 13, 2014 - link

    Making dual socket boards scale well is tremendously complex. You also need to increase things like the CPU cache by a lot. Not to mention that performance would tend to scale very badly with the additional CPU cores for things like gaming. Reply
  • 5thaccount - Tuesday, January 21, 2014 - link

    I'm not so interested in dual graphics... I am really curious to see how it performs as a standard old-fashioned CPU. You could even bench it with an nVidia card. No one seems to be reviewing it as a processor. All reviews review it as an APU. Funny thing is, several people I work with use these, but they all have discrete graphics. Reply
  • geniekid - Tuesday, January 14, 2014 - link

    Nvm. Too early! Reply
  • boozed - Tuesday, January 14, 2014 - link

    You must be a hoot at parties. Reply
  • boozed - Wednesday, January 15, 2014 - link

    And I hit reply on the wrong bloody comment. My apologies... Reply
  • monsieurrigsby - Wednesday, January 29, 2014 - link

    I'm a bit slow to the party, but talk of discrete GPUs leads me to the main question I still have that I don't see explained (possibly because the authors assume deeper understanding of CPU/GPU programming), and haven't seen discussed elsewhere. (I've not looked *that* hard...)

    If you have a Kaveri APU and a mid/high-end discrete GPU that won't work with Dual Graphics (if it arrives), what processing can and can't use the on-APU GPU? If we're talking games (the main scenario), what can developers offload onto the onboard GPU and what can't they? What depends on the nature of the discrete card (e.g., are modern AMD ones 'HSA enabled' in some way?)? If you *do* have a Dual Graphics capable discrete GPU, does this still limit what you can *explicitly* farm off to the onboard GPU?

    My layman's guess is that GPU compute stuff can still be done but, without dual graphics, stuff to do with actual frame rendering can't. (I don't know enough about GPU programming to know how well-defined that latter bit is...)

    It's just that that seems the obvious question for the gaming consumer: if I have a discrete card, in what contexts is the on-APU GPU 'wasted' and when could it be used (and how much depends on what the discrete card is)? And I guess the related point is how much effort is the latter, and so how likely are we to see elements of it?

    Am I missing something that's clear?
    Reply
  • monsieurrigsby - Wednesday, January 29, 2014 - link

    Plus detail on Mantle seems to suggest that this might provide more control in this area? But are there certain types of things which would be *dependent* on Mantle?
    http://hothardware.com/News/How-AMDs-Mantle-Will-R...
    Reply
  • nissangtr786 - Tuesday, January 14, 2014 - link

    I told AMD fanboys the FPU on Intel and the raw MFLOPS/MIPS of Intel CPUs destroy the current A10 APUs; it's no real surprise all those improvements show up as very little in benchmarks with Kaveri's Steamroller cores. AMD fanboys said it would reach i5 2500K performance; I said i3 4130, and overall the i3 4130 will be faster in raw performance, and I was right. I personally have an i5 4430 and it looks like i5's still destroy these a10 apu in raw performance.

    http://browser.primatelabs.com/geekbench3/326781
    browser.primatelabs.com/geekbench3/321256
    a10-7850k Sharpen Filter Multi-core 5846 4.33 Gflops
    browser.primatelabs.com/geekbench3/321256
    i5 4430 Sharpen Filter Multi-core 11421 8.46 Gflops
    Reply
  • gngl - Tuesday, January 14, 2014 - link

    "I personally have an i5 4430 and it looks like i5's still destroy these a10 apu in raw performance."

    You seem to have a very peculiar notion of what "raw performance" means, if you're measuring it in terms of what one specific benchmark does with one specific part of the chip. There's nothing raw about a particular piece of code executing a specific real-world benchmark using a particular sequence of instructions.
    Reply
  • chrnochime - Tuesday, January 14, 2014 - link

    Who cares what CPU you have anyway. If you want to show off, tell us you have at least a 4670k and not a 4430. LOL Reply
  • keveazy - Tuesday, January 14, 2014 - link

    It's relevant that he used the i5 4430 in his comment. Compare the price range and you'll see. These AMD APUs are useless unless you're just looking to build a PC that's not meant to handle heavily threaded tasks. Reply
  • tcube - Thursday, January 16, 2014 - link

    Ok... heavily threaded tasks, ok... examples! Give me one example of a piece of software that 90% of PC users use 90% of the time that this APU can't handle... then and ONLY then is the CPU relevant! Other than that it's just bragging rights and microseconds nobody cares about on a PC!

    Instead, we do care to have a chip that plays anything from HD video to AAA 3D games, is also fast enough for anything else, and doesn't need a GPU with its extra cost, power usage, heat and noise! And that isn't any Intel that fits on a budget!
    Reply
  • keveazy - Saturday, January 18, 2014 - link

    I'll give you 1 example. Battlefield 4. Reply
  • HanzNFranzen - Saturday, January 18, 2014 - link

    Yea, 90% of people use Battlefield 4 90% of their time on the PC... You missed the question. Reply
  • keveazy - Saturday, January 18, 2014 - link

    Doesn't matter.
    If you buy a PC, it's better to make sure it's solid and ready to handle applications that require strong physical CPU performance, unless you're still living in the 90s, dude.

    My point is, AMD's highest-end Kaveri chip, the 7850K, is today priced in the same range as Intel's low-end i5 CPUs. From here, take your pick. If you want a system that has worse graphics than a PS4, go AMD.
    Reply
  • kmi187 - Sunday, January 19, 2014 - link

    If you had actually lived through the 90s you would know that CPU power was a lot more important back then than it is now. You didn't have hardware (GPU) acceleration for video and all that jazz to take stress off the CPU.

    Also, you do realize the PS4 runs on AMD hardware, right?
    Reply
  • medi02 - Tuesday, January 28, 2014 - link

    What on earth are you talking about? What is a "solid PC"? One that fails at games? Reply
  • vAngz - Sunday, January 26, 2014 - link

    I believe you and others are missing the point. Can you play Battlefield 4 on medium settings @ 1080p using only the i5 4430, without using a discrete graphics card, and it still be playable above 30fps?

    Even though the chips are the same price (at most places), you can actually play BF4 using only the A10-7850K, which is not possible with the i5 4430. I bring this up mainly because you brought up BF4. When, or rather if, Mantle is ever activated in BF4, things will change and we will see even better performance from the A10-7850K and later-gen AMD APUs.

    So, yes, you can get better performance out of the i5 4430, but you will need to spend more money on a discrete graphics card to use it for gaming, such as for BF4. I believe this is what sets the AMD APUs apart from Intel's offerings at the moment. We need to compare apples to apples.

    I could be wrong, but I haven't seen anywhere in my research where anyone is getting that kind of performance out of i5 4430 without a discrete graphics card added into the mix. If you have a link on such info please share it with us. Thanks.
    Reply
  • keveazy - Friday, February 07, 2014 - link

    You're not getting it either. My mistake in my previous post is that I didn't mention Battlefield 4 multiplayer.
    That's where the APU will fail. It will fail in both CPU performance and GPU performance. The APU is still the better choice for light gaming.
    Reply
  • theduckofdeath - Wednesday, January 15, 2014 - link

    I think he's referencing that processor simply because it's a pretty powerful, fairly low-priced processor for all of those who can live without all of those over-clocker tweaks... Reply
  • just4U - Tuesday, January 14, 2014 - link

    You talk about 2500K performance, and yet the majority of people I come across are not even working with that. The vast majority are still in the C2D/8800 performance arena. What would be nice to see from some of these review sites in their performance analysis is whether stuff like this finally makes it sensible to bite the bullet and get rid of the old dog.. Reply
  • just4U - Tuesday, January 14, 2014 - link

    Ian.. that's something I think you should look at btw.. we do get a fair number of lurkers, with some posting questions like that about how it compares to the old warhorses they're on. Hell, even for those of us with old parts kicking around it's something to consider. Do we scrounge up some cheap DDR2, a PSU, and hand-me-down hard drives and pair it all up for that box in front of the television, or do we say no.. this makes far more sense and it's new. Reply
  • Ian Cutress - Tuesday, January 14, 2014 - link

    I took a look back at the C2D this time last year: http://www.anandtech.com/show/6670/dragging-core2d...

    When I get into the swing of testing for Gaming CPU viability again, I'll make sure it is part of the testing matrix.
    Reply
  • ImSpartacus - Tuesday, January 14, 2014 - link

    I've been reading AnandTech since I was in high school and that has to be my favorite article. I reference it constantly.

    It's so hard to find reliable & exhaustive benchmarks of old CPUs. If you could update it every 2 years, I would love you forever!
    Reply
  • just4U - Tuesday, January 14, 2014 - link

    I read that article and it got me thinking.. Maybe what is needed is not a direct comparison with new and competing products (which companies may not like..) but rather something standalone that gets refreshed, like your E6400 article. It sets the bar on what the reviewer (and likely most of us) think is needed these days.. I know for myself I see a lot of sideways steps in the computer industry, but it's no longer leaps ahead like it once was. Reply
  • alyarb - Wednesday, January 15, 2014 - link

    That would be great. I have a C2Q at 3.7 GHz and a 5850 at 800 MHz. Sure, that is >350W under load, but it still gets the job done at 1080p even in 2014. I have tried and can't justify replacing it all just yet.

    Similarly I'm not surprised to see Llano is not at the bottom of these charts and is still within striking distance of Kaveri in a lot of the tests. One day I'd like to see the past 8 or 10 years of CPUs all put through the same battery of 2013-2014 tests.

    Integration and new features are all welcome, but let's take a look, as performance skeptics, at how far we've really come all this time.
    Reply
  • anubis44 - Saturday, January 18, 2014 - link

    "I have a C2Q at 3.7 GHz and a 5850 at 800 MHz. Sure, that is >350W under load, but it still gets the job done at 1080p even in 2014. I have tried and can't justify replacing it all just yet."

    Try playing Company of Heroes 2 (my current favourite) on that rig, and understand the meaning of the word 'obsolete'. That game will bring that system to its knees, and it won't be pretty.
    Reply
  • just4U - Sunday, January 19, 2014 - link

    Throw in a 760 or a 280X then.. see if it's still brought to its knees.. Hell, a 270/X might do.. it's substantially faster than the 5850 as well. Reply
  • SofS - Friday, January 17, 2014 - link

    Careful when comparing older processors regarding the memory subsystem: without an integrated controller they are very sensitive to memory performance, or at least my C2Q 9550 @ 3410MHz seems to be. In my case the upgrade to a G.Skill F3-12800CL6-2GBXH dual kit that I made some years ago was meaningful, and some other readers here on similar platforms might find that upgrading the RAM alone would give them enough headroom to avoid a whole new system purchase for a while longer. I also own an i7-4800MQ based notebook with dual KHX1600C9S3/8G, and while it is noticeably faster in some cases, it does not really let me game at higher settings than my desktop system, its GPU being a GTX 765M. Going forward, a GPU upgrade for the desktop system is all I am looking for. Reply
  • RussianSensation - Friday, January 17, 2014 - link

    Sorry, but you may have a confirmation bias here. You bought new memory expecting the system to perform much faster but years and years of personal ownership of C2D/C2Q systems and online reviews show that it hardly performed faster with faster memory. That architecture in fact performed faster with tighter latency. Your kit doesn't even have 5-5-5-15 timings. C2Q 9550 @ 3.4ghz is a slow CPU compared to Core i7 4770 @ 4.5ghz for gaming. Your memory upgrade may have netted you an extra 2-3% increase on average at best. Reply
  • SofS - Friday, January 17, 2014 - link

    Come to think of it, maybe the amount was more important; besides going down to CL6, it went from 2x1GB to 2x2GB. If that is the case then my 765M must be holding the 4800MQ back in gaming, or something else is very wrong. Currently the only games I play that do not perform properly at 1080p are The Witcher 2 and Tomb Raider; that probably has more to do with the GPU than the CPU/RAM, though the real question is whether a better next-generation mid-range GPU would still work properly with them. Reply
  • Albangalo - Wednesday, January 15, 2014 - link

    While it's not to do with Kaveri, Tom's Hardware did some articles comparing current cpus with older ones:
    Intel Ivy vs c2d & c2q: http://www.tomshardware.com/reviews/ivy-bridge-wol...
    AMD fx vs k10: http://www.tomshardware.com/reviews/piledriver-k10...
    Reply
  • SofS - Wednesday, January 22, 2014 - link

    Following your links and looking around I found:
    http://www.tomshardware.com/reviews/core-memory-sc...

    It links to previous similar articles concerning the Phenom II and the i7 of the time (975). It seems that the C2Q indeed does not benefit much from memory improvements compared to the other two, but there is a difference. All three cases are relevant since all three models were very popular. Also, I remember choosing the smaller modules at the time for my first kit with this particular system, since they were the only reasonable DDR3-1600 modules within reach, albeit I never managed to stabilize them at CL6. The kit I later upgraded to, on the other hand, got CL6 from XMP from the beginning while being larger. Given that memory is very cheap compared to a whole system plus the cost of repurchasing non-portable software, this (maybe also a new GPU) might be just the final push many need to hold out for the next generation of native DDR4 systems.
    Reply
  • fokka - Tuesday, January 14, 2014 - link

    I understand your sentiment, but then again, just about every modern mainstream CPU should destroy a C2D, and even a C2Q, in raw performance. And you even get relatively capable integrated graphics included in the package, so just about everyone even moderately interested in computing performance and efficiency "should bite the bullet" if they've got a couple hundred bucks on the side. Reply
  • just4U - Wednesday, January 15, 2014 - link

    And that's the problem.. they're not. "It's good enough." Numbers are.. just that, numbers. We hit a wall in 2008 (or thereabouts..) and while performance kept increasing, it's been in smaller increments. Over the span of several generations that can really add up, but not the way it once did.

    It used to be you'd get on an old system and it would be like pulling teeth because the differences were very noticeable, and in some cases they still are.. but for the most part? Not so much.. not for normal/casual usage. There is a ceiling.. Athlon X2s, P4s? No.. you'll notice it.. Quad 8x Core2? hmmm.. How about a socket 1366 CPU, or the 1156 stuff? Or the PIIs from AMD. Should those people upgrade? Certainly if their board dies and they can't replace it.. but otherwise not so much.
    Reply
  • just4U - Wednesday, January 15, 2014 - link

    That should have read Quad 8x-series Core2s.. anyway, these days it seems like we much more often swap out the video card, add an SSD, or increase RAM, rather than build systems from the ground up, as systems can stick around longer and still be quite viable. Yes/no? Reply
  • tcube - Thursday, January 16, 2014 - link

    Totally agree. We're led to believe that we need to upgrade every 2 years or so... yet a great many are still using old CPUs, even dual cores, with new software and OSes without a care in the world, because there is no noticeable improvement in CPU usage. CPU power became irrelevant after the C2Q; nothing beyond that power is justifiable in normal home or office usage. Certainly some professional users will want a cheap workstation and will buy into the high-end PC market, as will extreme gamers, or those after bragging rights. But thinking that anything from browsing to medium Photoshop usage or moderate video-editing work will REQUIRE anything past a quad core like the low-end i5s or this Kaveri is plain false. You will, however, notice the lack of a powerful GPU when gaming or doing other GPU-intensive tasks... so AMD has a clear winner here.

    I do agree it's not suited for heavy x86 work... but honestly... most software stacks that previously relied heavily on the CPU are moving to OpenCL to get a massive boost from the GPU, Photoshop being just one of many... so yeah, the powerful GPU on Kaveri is a good incentive to buy, and the x86 performance is better than Richland, which is sufficient for me (as I currently use a Richland CPU), so...
    Reply
  • Syllabub - Friday, January 17, 2014 - link

    I am not going to try and pick a winner but I follow your line of reasoning. I have a system with a e6750 C2D and Nvidia 9600 that still gets the job done just fine. It might be described as a single purpose type of system meaning I ask it to run one or possibly two programs at the same time. What I think is pretty wild is that when I put it together originally I probably sank something close to $250 into the CPU and GPU purchase while today I potentially get similar performance for under $130 or so. The hard part is buying today in a manner that preserves a level of performance equivalent to the old system; always feel the tug to bump up the performance ladder even if I don't really need it. Reply
  • Flunk - Thursday, January 16, 2014 - link

    That doesn't really make sense unless you also include equivalently-priced current Intel processors. People may be moving on from Core 2s but they have the opportunity to buy anything on the market right now, not just AMD chips. Reply
  • PPB - Tuesday, January 14, 2014 - link

    Adding a $350 CPU plus a $50 GPU to an iGP gaming comparison = Anandtech keeping it classy. Reply
  • MrSpadge - Tuesday, January 14, 2014 - link

    You do realize they're not recommending this in any way, just showing the full potential of a low-end discrete GPU which wouldn't be bottlenecked by any modern 3+ core CPU? Reply
  • Homeles - Tuesday, January 14, 2014 - link

    PPB being an ignorant critic, as usual.

    "For reference we also benchmarked the only mid-range GPU to hand - a HD 6750 while connected to the i7-4770K."
    Reply
  • DryAir - Tuesday, January 14, 2014 - link

    So at playable settings (30+ fps) Kaveri is no better than Richland. And both get outperformed by Iris Pro. Reply
  • jeffkibuule - Tuesday, January 14, 2014 - link

    That CPU with Iris Pro costs $450 compared to these AMD chips which are far less expensive. Reply
  • takeship - Tuesday, January 14, 2014 - link

    Only if you spring for the i7 variant. The i5 variant is ~$300. Still a premium over Kaveri, but you're also getting nearly double the CPU power. Reply
  • mr_tawan - Tuesday, January 14, 2014 - link

    Also, Iris comes only with the R-variants of the i5/i7, which won't go into your motherboard's socket, so forget about a DIY machine. You can always get a dGPU for that kind of machine, though. Reply
  • just4U - Tuesday, January 14, 2014 - link

    Is that why you can't buy the R variant at the usual places? I wasn't even aware that they are not compatible with regular 1150 boards.. hmmm.. that's too bad. I just thought they were in low supply or going to OEMs before hitting the retail channels. Reply
  • Gigaplex - Wednesday, January 15, 2014 - link

    I'm pretty sure they only come in soldered-on variants. Reply
  • Klimax - Friday, January 17, 2014 - link

    Correct; however, some rumors say that won't be the case anymore with Broadwell/the Haswell refresh. (Depends on who is correct.) Reply
  • thevoiceofreason - Tuesday, January 14, 2014 - link

    Precisely. All those percentage improvement at 1080p graphs look very nice but are in the end moot if you realize you are looking at 12fps.

    At the end of the day, you can barely play at 720p.
    Reply
  • methebest - Tuesday, January 14, 2014 - link

    he needed to test them on low settings at 1080p. Reply
  • Principle - Tuesday, January 14, 2014 - link

    That's because they don't do these APUs any justice with their review strategy. They basically want to push GPUs, so they do not highlight the APUs' actual capabilities. For example, how about some qualitative, subjective analysis, rather than all of this easily comparable quantitative nonsense? I want to know, at 1080p, what settings have to be turned down to be playable. Why in the world would they run these at Extreme settings??? It's absurd, unless you're trying to sell dGPUs. Where the AMD solutions really have an advantage is in the details. You can likely turn off AA and see the AMD FPS double, without nearly as big a jump from the Intel IGP, where you would likely have to turn everything off or to low. Reply
  • Nagorak - Wednesday, January 15, 2014 - link

    The fact is these "APUs" are really pretty worthless. If you're going to be seriously gaming then you want a discrete graphics card, even a low-end one. The problem with AMD's strategy here is that the integrated GPU performance is still so anemic that it's only useful for people who want a budget PC and who don't actually play games. Therefore the "benefits" of having both CPU and GPU on one chip are highly questionable. Reply
  • ImSpartacus - Thursday, January 16, 2014 - link

    That could be solved if we had a much larger GPU on the APU, right?

    I want to see at least 12 CUs on the top-end APU, preferably more like 16-20 CUs.
    Reply
  • lmcd - Monday, January 20, 2014 - link

    Did you miss the whole conversation about memory bandwidth? Reply
  • nader_21007 - Saturday, January 18, 2014 - link

    Below the charts showing the games' performance there are two tabs, one showing the average frame rate and the other, if you just click on it, showing the minimum frame rates for the APUs compared. The minimum frame rate of the Iris Pro is the lowest and worst among all the APUs compared; that means sudden hangs all through the game. Kaveri and the other AMD APUs have no such weakness, because of the high-performing IGP. Now tell me which is outperformed? Reply
  • themeinme75 - Saturday, January 18, 2014 - link

    The good news is the APU 7850 / Iris Pro 5200 just caught up to the 4850.. Reply
  • frozentundra123456 - Tuesday, January 14, 2014 - link

    Good review, except it doesn't really address the elephant in the room: even for gaming, a low-end CPU like the Athlon X4 with an HD 7750 will be considerably faster than any APU. So in this regard I disagree with the conclusion that Kaveri is the best solution for low-end gaming. It is disappointing, actually, that he did not use an HD 7750 GDDR5 as the discrete GPU comparison, because that would have given a more direct view of how bandwidth restrictions are affecting Kaveri.

    I will say, though, that the low-TDP parts seem to get a nice improvement in performance. They actually seem more attractive than the high end, since there is little gaming improvement at the top end vs Richland.
    Reply
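The bandwidth restriction mentioned above can be put in rough numbers (nominal peak figures, assumed rather than taken from the review): dual-channel DDR3-2133 feeding Kaveri versus the GDDR5 on a typical 128-bit HD 7750.

```python
# Peak theoretical memory bandwidth in GB/s.
def ddr3_dual_channel(mt_per_s):
    # Two 64-bit (8-byte) channels.
    return mt_per_s * 8 * 2 / 1000

def gddr5_128bit(effective_mt_per_s):
    # 128-bit bus = 16 bytes per transfer.
    return effective_mt_per_s * 16 / 1000

print(ddr3_dual_channel(2133))  # ~34.1 GB/s, shared between CPU and iGPU
print(gddr5_128bit(4500))       # ~72 GB/s, dedicated to the discrete GPU
```

Roughly half the bandwidth, and the iGPU has to share it with the CPU cores — which is why the GDDR5 card comparison would have been so telling.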
  • thevoiceofreason - Tuesday, January 14, 2014 - link

    Or something like a Pentium G2120 ($90 at Newegg) and an HD 7770 ($90 for the ASUS or MSI on Newegg after rebate). The difference between such a setup and a top-of-the-line Kaveri chip is not even funny. Kind of tragic, really. Reply
  • andrewaggb - Tuesday, January 14, 2014 - link

    I agree. I am extremely disappointed. CPU performance and GPU performance barely changed from last year's model. HSA is definitely the future, and Mantle may deliver a healthy fps improvement, but I feel like this product has almost no value at launch. It all depends on future software.

    I was actually considering a Kaveri for a kids' PC... but I'm definitely way better off getting an Intel CPU and an AMD GPU.
    Reply
  • andrewaggb - Tuesday, January 14, 2014 - link

    I guess they did make big gains in TDP. So it's sort of their version of a Haswell refresh. But Haswell had a larger improvement in both CPU and GPU performance relative to the previous chip. Reply
  • nader_21007 - Saturday, January 18, 2014 - link

    Can you show me what improvement Haswell made over the previous gen? TDP went from 77W to 84W, while performance dropped in most cases. Can't you see the charts in this review? Reply
  • Principle - Tuesday, January 14, 2014 - link

    Andrew, that depends on size, budget, etc... I own an AMD Piledriver CPU and could never tell you when it was supposedly slower; maybe a game takes a couple of seconds longer to load, but after that it's all the same.

    And I have used Intel CPUs too, and have had hiccups and lag when multitasking with them in real life, which never happens on my AMD systems. If you get an i5 and an AMD GPU, that would be great and would last, with the GPU compute advantage of AMD GPUs and the Mantle potential.

    These Kaveris have a lot of value at launch for entertainment-center PCs or ITX platforms, because at 65W or even 45W they deliver a lot of performance in one chip that you can keep cool and quiet in a small package. They are also good for all-in-one PCs built into the monitor. Not for the avid gamer right now, but a little more future-proof than an Intel CPU, in my opinion.
    Reply
  • ImSpartacus - Thursday, January 16, 2014 - link

    If you're not gaming, is it really that hard to "future-proof" your CPU?

    I feel like most low end CPUs will perform "basic" non-gaming tasks for many years to come.
    Reply
  • andrewaggb - Tuesday, January 14, 2014 - link

    To be clear, I'd get an i5 quad core with a 260X or 270X. I realize they aren't at all in the same price range, but it's good performance per dollar.

    I was expecting Kaveri to have 10% better CPU performance and 25% better GPU performance. This has equal CPU performance and essentially equal GPU performance. It has other improvements, but that's a serious disappointment on the performance side of things.

    I've already got three i5 quad cores, with a 6870, 7850, and 270X in each, and I'm happy with them. I just thought Kaveri might be good enough, and it is for older stuff and Minecraft and whatnot.
    But it seems like yet another year where paying the extra money and getting some longevity is going to be the right move.

    Quite frankly, my oldest system, the i5 750 with a 6870, would mop the floor with Kaveri in everything but power consumption.
    Reply
  • yankeeDDL - Wednesday, January 15, 2014 - link

    You're kidding, right?
    It practically doubled the performance per watt of Richland (the 45W Kaveri almost always outpaces the 100W Richland), and that's disappointing?
    It's true that Richland was way behind, but the improvement is massive.
    There's still a glaring gap with Intel's CPUs, but it is smaller.
    Just as the glaring gap on the GPU side (this time in AMD's favor) got wider.
    HSA is the key for AMD to push the GPU advantage over to the CPU side to compensate. If it works, then Kaveri will really be up to, or better than, a Core i5 that costs more than 2x... "IF"...
    Reply
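The "practically doubled performance per watt" claim is just the inverse of the TDP ratio if performance is equal, which a one-line check confirms (the equal-performance premise is the commenter's, not a measured result):

```python
# If a 45 W Kaveri matches a 100 W Richland at the same performance,
# the performance-per-watt ratio is simply the inverse TDP ratio.
perf = 1.0                       # identical performance, normalized
ratio = (perf / 45) / (perf / 100)
print(round(ratio, 2))           # -> 2.22
```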
  • Jaybus - Thursday, January 16, 2014 - link

    I'm not convinced HSA is the future. It is a diminishing-returns issue. The only difference between HSA and SMP is that different types of cores are being used; the bus arbitration and coherency issues are exactly the same. Neither is scalable to dozens of cores, let alone hundreds: HSA has the same limitations as SMP. Something like Knights Corner's ring bus and message passing is more likely the future. Near term, there is an advantage to HSA. Long term will rely on a much faster chip-to-chip interconnect for transfers, and segmented memory, to avoid the arbitration and coherency issues. CMOS silicon photonics, maybe. That would enable optical busses orders of magnitude faster than PCIe, or indeed much faster than any chip-to-chip electronic bus, and that would make something like Knights Corner's ring bus the future path to high core counts. Reply
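The ring-bus behaviour Jaybus describes can be sketched with a toy model (pure Python, purely illustrative; the node count is arbitrary and nothing here reflects the real Knights Corner interconnect): on a unidirectional ring, a message hops node to node, so latency grows with ring distance rather than with every core contending for one shared, arbitrated bus.

```python
def ring_hops(src, dst, n_nodes):
    """Hops for a message on a unidirectional ring of n_nodes."""
    return (dst - src) % n_nodes

print(ring_hops(0, 1, 16))   # adjacent nodes: 1 hop
print(ring_hops(0, 15, 16))  # nearly a full loop: 15 hops

# Average distance from node 0 to every other node on a 16-node ring.
avg = sum(ring_hops(0, d, 16) for d in range(1, 16)) / 15
print(avg)  # -> 8.0
```

A bidirectional ring halves the worst case; the point is only that latency scales with ring distance, not with global bus contention across all cores.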
  • jimjamjamie - Thursday, January 16, 2014 - link

    A genuinely interesting and insightful comment, thanks. Reply
  • artk2219 - Tuesday, January 14, 2014 - link

    Until you play a game that uses more than 2 threads, or have tasks running in the background while gaming, then you'll wish you had those two extra threads. Seriously, I wish people would quit trying to recommend dual cores for gaming or even general use, unless it's a machine for the type of person that only does one or two things at a time. Dual cores are showing their age now; it's only going to be worse a year or two from now. Also, why would you spend $90 on a Pentium dual core when you could spend $80 on an Athlon 750K or that same $90 on a 760K? They have similar single-thread performance and stomp the G2120 in multithreaded situations, plus they're unlocked so you can overclock to your heart's content. I'm not saying that Kaveri isn't overpriced right now; they could stand to drop 20 dollars from the top two chips and 10 from the last chip reviewed. But they just launched, those prices will change, and in the end it's easier to point people to one part for all of their needs than it is to point them to two.

    http://www.newegg.com/Product/ProductList.aspx?Sub...
    Reply
  • Nagorak - Wednesday, January 15, 2014 - link

    The Intel processors are more energy efficient. That's one reason. Reply
  • artk2219 - Wednesday, January 15, 2014 - link

    Fair enough, but it's a negligible difference once you factor in the discrete GPU that you would be pairing it with anyway. Cooling it shouldn't be any more of a problem than cooling the same setup with the dGPU. Granted, there aren't really any FM2+ ITX boards, so that may be a problem if you're going for a tiny size, but that's about it. Reply
  • retrospooty - Tuesday, January 14, 2014 - link

    "a low end cpu like the athlon X4 with a HD7750 will be considerably faster than any APU. So in this regard, I disagree with the conclusions that for low end gaming kaveri is the best solution."

    I get your point, but it's not really a review issue, it's a product issue. AMD certainly can't compete in the CPU arena. They are good enough, but nowhere near Intel from two generations ago (Sandy Bridge from 2011). They have a better integrated GPU, so in that sense it's the best integrated GPU, but as you mentioned, if you are into gaming you can still get better performance on a budget with a budget add-in card, so why bother with Kaveri?
    Reply
  • Homeles - Tuesday, January 14, 2014 - link

    "I get your point, but it's not really a review issue, it's a product issue."

    Well, the point of a review is to highlight whether or not a product is worth purchasing.
    Reply
  • mikato - Wednesday, January 15, 2014 - link

    I agree. He should have offered analysis from the viewpoint of different computer purchasers. Just one paragraph would have worked, to fill in the blanks.. something like these -
    1. the gamer who will buy a pricier discrete GPU
    2. the HTPC builder
    3. the light gamer + office productivity home user
    4. the purely office-productivity work person
    Reply
  • just4U - Tuesday, January 14, 2014 - link

    I can understand why he didn't use a 7750/70 with GDDR5... all sub-$70 video cards I've seen come with DDR3. You're bucking up by spending that additional 30-60 bucks (sales not considered). Reply
  • Computer Bottleneck - Tuesday, January 14, 2014 - link

    The R7 240 GDDR5 comes in at $49.99 AR---> http://www.newegg.com/Product/Product.aspx?Item=N8...

    So cheap Video cards can have GDDR5 at a low price point.
    Reply
  • just4U - Tuesday, January 14, 2014 - link

    That's a sale though.. it's a $90 card.. I mean sure if it becomes the new norm.. but that hasn't been the case for the past couple of years. Reply
  • ImSpartacus - Thursday, January 16, 2014 - link

    Yeah, if you get aggressive with sales, you can get $70 7790s. That's a lot of GPU for not a lot of money. Reply
  • yankeeDDL - Tuesday, January 14, 2014 - link

    Do you think that once HSA is supported in SW we can see some of the CPU gap reduced?
    I'd imagine that *if* some of the GPU power can be used to help on FP type of calculation, the boost could be noticeable. Thoughts?
    Reply
  • thomascheng - Tuesday, January 14, 2014 - link

    Yes, that is probably why the CPU floating-point performance isn't as strong, but we won't see the benefit until developers use OpenCL and HSA. Most likely the big selling point in the immediate future (3 to 6 months) will be Mantle, since it is already being implemented in games. HSA and OpenCL 2.0 are just starting to come out, so we will probably see more news on that 6 months from now, with partial support in some applications and full support after a year. If the APUs in the PlayStation 4 and Xbox One are also HSA-capable, we will see games make use of it before general desktop applications. Reply
  • yankeeDDL - Tuesday, January 14, 2014 - link

    Agreed. I do hope that the gaming consoles pave the way for broader adoption of these new techniques. After all, gaming has been pushing most of the innovation for quite some time now.
    CPU improvement has been rather uneventful: I still use a PC with an Athlon II X2 @ 2.8GHz, and with a decent graphics card it's actually plenty good for most work. That's nearly a 5-year-old CPU, and I don't think there's a 2X improvement even going to a Core i3. In any case, there have to be solutions for improving IPC that go beyond circuit optimization, and HSA seems promising. We all have something to gain if it happens: it would be nice to have some competition on the CPU side again.
    Reply
  • jasonelmore - Tuesday, January 14, 2014 - link

    I really wish these were launching in BGA GDDR5 Laptop/Mini ITX Packages. Reply
  • jaydee - Tuesday, January 14, 2014 - link

    Pretty much what I was thinking as well. There are two mini-ITX FM2+ motherboards available on Newegg; neither is "thin", and neither has DisplayPort. AMD's opportunity here is to market its 45W Kaveri as the best CPU/GPU for the price in a small package. They NEED to get outside of the typical ATX, micro-ATX, mini-ITX box and into SFF, have all the ports that everyone wants, and be creative with packaging and configurations (like GDDR5). They will never win a war with Intel in traditional form-factor PCs, which is a rapidly shrinking market anyway. Reply
  • takeship - Tuesday, January 14, 2014 - link

    Agreed. Any build not restricted to half height GPUs is better off going with a cheap intel cpu & discrete card. AMD really should be targeting ultra SFF type builds where Iris Pro is thermally limited, and a dGPU isn't an option. Reply
  • rhx123 - Tuesday, January 14, 2014 - link

    GDDR5 7750s are available in half height and at a decent price point, so even in a low-profile machine a cheap Intel + 7750 is a better option. That's what I'm running anyway. A passively cooled i3 never reaches above 65°C, and the Sapphire 7750 Low Profile is pretty quiet at idle. Reply
  • Mopar63 - Tuesday, January 14, 2014 - link

    The last paragraph of this article shows someone that GETS IT: where Kaveri and the APU design in general are heading. Reply
  • nissangtr786 - Tuesday, January 14, 2014 - link

    I can't believe how right I was. I was predicting i3-4130 CPU performance and 2400-2500 3DMark11 GPU performance, similar to the GT 650M in my laptop. Funnily enough, my laptop with an i5-3210M at 2.9GHz and a GT 650M takes 87W including the screen; with an i3-4130 it would take about 92W, so that's about right. I am shocked how spot-on I was. Reply
  • HammerStrike - Tuesday, January 14, 2014 - link

    The entire Anand reader base congratulates you on your deep insight and prophetic powers of deduction. Reply
  • nathanddrews - Tuesday, January 14, 2014 - link

    I, for one, welcome our new nissangtr786 overlord. Reply
  • Zorba - Tuesday, January 14, 2014 - link

    It would have been nice to see some non-integrated chips added to the benchmarks like an FX-6300. Ever since the APUs came out, it seems no reviews actually compare high-end iGPU vs moderate dGPU and CPU. Looking at the price, you could get a decent CPU+GPU for the cost of the A10-7850K, so it would be nice to see that as an option. Reply
  • R3MF - Tuesday, January 14, 2014 - link

    "do any AnandTech readers have an interest in an even higher end APU with substantially more graphics horsepower?"

    Yes, and No.

    I do want a higher-end APU, but I'd like to see one with four CPU modules and 256 shaders:

    47% of the Kaveri die space is GPU:
    http://www.extremetech.com/wp-conten...d-to-intel....
    If you consider that roughly 20% is uncore, that leaves roughly 33% as CPU.
    Give or take, the 8 shader cores take up fifty percent more area than the 4 CPU cores.
    You could double that CPU portion to 66% and still leave 14% for shader cores.
    Make the total die size just 10% bigger and you have an 8-CPU-core APU with 4 HSA-enabled shader cores ready to grind through FPU work. Pretty much die-size neutral.
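    The back-of-envelope math above can be checked with a quick sketch (illustrative only; the percentages are this comment's rough figures, not a real Kaveri floorplan):

```python
# Area budget as fractions of one Kaveri-sized die, per the comment.
gpu, uncore = 0.47, 0.20        # ~47% GPU (8 shader cores/CUs), ~20% uncore
cpu = 1.0 - gpu - uncore        # ~33% left for 4 CPU cores (2 modules)

die = 1.10                      # grow the die ~10%
cpu8 = 2 * cpu                  # double the CPU portion: 8 cores, ~66%
shaders = die - uncore - cpu8   # ~24% of the original die left for GPU

# If 8 CUs occupy 0.47 of a die, the leftover area buys roughly:
cus = 8 * shaders / gpu         # ~4 CUs, matching the comment's claim
print(round(cpu, 2), round(shaders, 2), round(cus, 1))
```

    So with a ~10% larger die the arithmetic does land at roughly 4 shader cores alongside 8 CPU cores, as claimed.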
    Reply
  • dbcoopernz - Tuesday, January 14, 2014 - link

    I'd like an APU with enough GPU power to run all the high-quality options in madVR. It would make a very nice HTPC chip. Reply
  • thomascheng - Tuesday, January 14, 2014 - link

    I think Mantle can make that happen, but we'll see how much support they get. Reply
  • MrSpadge - Tuesday, January 14, 2014 - link

    Mantle has nothing to do with GP-GPU, that's not using DirectX anyway. Reply
  • JDG1980 - Tuesday, January 14, 2014 - link

    My discrete 7750 couldn't handle Jinc scaling in MadVR (at least not without dropping frames on some 1080i test clips), so this is going to be another generation or two in the future.

    The PS4 APU could probably do it, if that was available in a generic PC form factor.
    Reply
  • MrSpadge - Tuesday, January 14, 2014 - link

    Add to that DDR4 and/or 4 memory channels, or at least a large on-package buffer like Crystal Well. Reply
  • yankeeDDL - Wednesday, January 15, 2014 - link

    But the whole point of HSA is to get the GPU to do CPU work that it can do better (like FP).
    So you wouldn't need more CPU cores at all.
    Look at it this way: AMD's CPU is less efficient than Intel's, while its GPU is more efficient.
    A CPU-imbalanced APU would be in a tough(er) spot competing against Intel. A GPU-imbalanced one, as Kaveri is, extends the lead it already has on the GPU side.
    Now imagine that HSA kicks in, and the GPU lead translates directly into a CPU lead...
    Reply
  • mikato - Wednesday, January 15, 2014 - link

    This is true. I hope we see more articles as adoption of HSA starts to take hold.

    It is too bad they are far behind in CPU power, but AMD has the right strategy. Either way, some things are better done on the GPU. AMD just has more incentive than Intel to get things moving that way sooner, given its GPU advantage and CPU disadvantage. Intel will have no choice but to follow that lead.
    Reply
  • nissangtr786 - Tuesday, January 14, 2014 - link

    http://www.extremetech.com/computing/174632-amd-ka...

    HSA does well in LibreOffice, the main thing AMD marketed.
    Reply
  • Ryan Smith - Tuesday, January 14, 2014 - link

    I'm normally not one to speak about other articles, but those are all OpenCL benchmarks. The OpenCL HSA driver won't be released for another quarter, and the HSA SDK is similarly far off.

    http://www.anandtech.com/show/7677/amd-kaveri-revi...
    Reply
  • krumme - Tuesday, January 14, 2014 - link

    What a benchmark of a review. I learned a lot. Great. Thanx. Reply
  • DanNeely - Tuesday, January 14, 2014 - link

    This is the first I've heard that Excavator will be the end of the line for the current AMD core. Is there any information about what's coming next publicly available yet? Reply
  • JDG1980 - Tuesday, January 14, 2014 - link

    It's all speculation because AMD hasn't released any roadmaps that far in advance. If I had to guess, I'd say they will probably beef up the "cat" cores (Bobcat -> Jaguar, etc.) and use that as their mainstream line. That would be similar to what Intel did when they were faced with a situation like this - they scaled up the mobile Pentium M to become the Core 2 Duo. Reply
  • jabber - Tuesday, January 14, 2014 - link

    The great shame of these chips is that the real market they should be selling in will never take off. These are perfect all-round chips for the folks that buy a family PC in the usual PC mega store. That family PC would be your usual Compaq/Acer desktop with a decent enough Intel chip in it, but only the crappy Intel IGP.

    But as AMD never advertises to these people (the people who should be buying this stuff), they will never buy them. The demand will never appear. They have heard of Intel; they hear the Intel jingle on the TV several times a week. But AMD? Never heard of them, so they can't be any good.

    Has anyone at Anandtech ever got round to interviewing the lazy idiot in the AMD marketing dept? Does AMD really have a marketing dept?

    AMD, sometimes you do have to push the boat out and make the effort. Really stick it under ordinary people's noses. Don't bother brown-nosing the tech review sites, cos most of their readers don't buy your stuff anyway.
    Reply
  • UtilityMax - Tuesday, January 14, 2014 - link

    AMD can't market the APUs directly to average consumers; they just buy what the PC mega-store sells them. AMD has to convince the OEMs, and that is _really_ hard. First there is the issue of Intel's quasi-monopoly: Intel always browbeat the major OEMs into ignoring AMD, and even after losing the lawsuit, I think this effect still exists. The next issue is that your typical average consumer does not game on a PC; they play on consoles. In fact, hardly anyone buys a PC box these days. Everyone buys laptops, and AMD's strategy there is just as weak. Reply
  • ThreeDee912 - Tuesday, January 14, 2014 - link

    They tried to get OEMs to put Llano chips into "thin and light" laptops, but Intel kind of beat them with their Ultrabook marketing.

    At least AMD kind of "won" the console wars by getting their CPUs into both the PS4 and XBone.
    Reply
  • xdesire - Tuesday, January 14, 2014 - link

    Sorry but i read it like this: this is another piece of sht hardware which is YET another disappointment for their fans. I owned many of their CPUs GPUs and stuff but enough is enough. They have been laying their a**es off for SO long and couldn't even make an improvement on their crap stuff. So, is this THE Kaveri we were promised for so long? I supported them in their worst days by buying their products, hoping to see them come back in the game BUT no, they are being lazy and don't improve sht.. Reply
  • jabber - Tuesday, January 14, 2014 - link

    Dear AMD marketing Dept, the above post signifies what I said in the last part of my last post.

    This is not the market/customer you are looking for!
    Reply
  • jnad32 - Tuesday, January 14, 2014 - link

    Actually, a 30% performance improvement seems pretty amazing to me. Also, please remember that all these tests are done with very early drivers, and we all know AMD takes forever to get its drivers in line. I wouldn't personally worry about numbers for the next couple of months. BTW, what were you expecting from an APU? Core i5? HA! I am a massive AMD fan, but we all know that wasn't even possible. What I really want to know is: where is my 8-core Steamroller chip? Reply
  • JDG1980 - Tuesday, January 14, 2014 - link

    I was hoping for IPC in line with at least Nehalem. The low IPC is really killing the "construction equipment" cores, and it's increasingly looking like an unfixable problem. If Steamroller could have brought ~30% IPC gains as was initially rumored, then that would have been a good sign, but at this point it seems they'd be better off taking their "cat" cores and scaling them up to desktop levels, and dropping the module architecture as a failed experiment. Reply
  • silverblue - Tuesday, January 14, 2014 - link

    A "construction equipment" (thanks) module actually gets an impressive amount of work done when taxed. The consensus has been to make software treat a module as a single core with HT. I imagine the cores will be fed better in single-threaded workloads in that circumstance.

    I also imagine that a heavily threaded workload will extract the very best from the architecture now that the MT penalty is gone.

    One question about the review scores: all the testing was done on Windows 7 64-bit SP1 with the Core Parking updates applied. Would using Windows 8 or 8.1 make any real difference to the results, or would it just benefit both AMD and Intel equally?
    Reply
  • jaydee - Tuesday, January 14, 2014 - link

    I just don't "get" adding three $300 Intel CPUs to a review of three sub-$200 AMD CPUs. We all know, or can find out, how the i7 SB, IB, and Haswell compare to each other. I can see adding one of these CPUs to show a baseline of how AMD's top-of-the-line compares against Intel's (albeit at very different price points), but having all three of them gives the impression that you want to make sure everyone knows who's boss...

    Is there going to be an update on power draw? I'm really curious to see what the 45W Kaveri draws (idle and full load) considering it is so competitive with the 95W Kaveri.
    Reply
  • jaydee - Tuesday, January 14, 2014 - link

    I stand corrected: four $300+ Intel i7 CPUs in this comparison, not three. And two Intel CPUs that are actually in the price range of the AMDs. It feels as if, to do this review, AT just cobbled together whatever was lying around instead of deliberately putting together the best test bench possible to compare the review product against its real competitors.

    It's really a shame, because the commentary, the in-depth look at the architecture, and the conclusions are outstanding; no website out there has tech writers as good or as thorough as Anandtech's. The fact that the test benches are just afterthoughts in some of these reviews is really disappointing.
    Reply
  • UtilityMax - Tuesday, January 14, 2014 - link

    I think this was mentioned somewhere in the beginning of the review. Intel likes to have the i7 reviewed instead of other parts, so they send i7 CPUs to everyone. This is kind of like what the car makers do: most people just buy a reasonably priced, mid-spec car model, but the journos always get to review ridiculously over-optioned cars that hardly sell, like $35K Ford Fusions, even though the base car sells for 10 grand less. Reply
  • jaydee - Tuesday, January 14, 2014 - link

    I am well aware that, given the choice, Intel would rather you review a $340 Intel CPU against a $179 AMD CPU. But is there no way, given the ad revenue of Anandtech, to obtain a $190-200 mid-range Intel i5 CPU (such as the i5-4440) in order to have a relevant test bench for an eagerly anticipated AMD mid-range CPU launch? Reply
  • srkelley - Tuesday, January 14, 2014 - link

    "...do any AnandTech readers have an interest in an even higher end APU with substantially more graphics horsepower?"

    Yes, oh yes! I'm letting the APU be the core of my system build in a few months and plan to upgrade as needed. I'd like a simple solution like an APU instead of having to go with a discrete card right away. If it lets me spend more on ram and other things, keep the psu and power draw low I'm happy. The most demanding game that I play right now is the Witcher 2. Eventually I will have to go discrete for Star Citizen and the Witcher 3 but if I can get solid enough results with a high powered apu then I'll simply jump to that instead.
    Reply
  • Conduit - Tuesday, January 14, 2014 - link

    This has been a long time coming, thanks to Always Major Delays (AMD). Even now they can't get their sh!t together. Reply
  • nissangtr786 - Tuesday, January 14, 2014 - link

    http://techreport.com/review/25908/amd-a8-7600-kav...
    The FPU is still not improved; still miles behind Intel.
    http://browser.primatelabs.com/geekbench3/326781
    http://browser.primatelabs.com/geekbench3/321256
    Reply
  • A5 - Tuesday, January 14, 2014 - link

    "do any AnandTech readers have an interest in an even higher end APU with substantially more graphics horsepower? "

    Maybe in the context of a Steam Machine? But for my main gaming PC, no way. Maybe something they can try out after the next die shrink if SteamOS really takes off.
    Reply
  • Xajel - Tuesday, January 14, 2014 - link

    I believe the main reason AMD doesn't have a new FX is the SOI process. It was okay in its glory days, but it can't keep up with bulk silicon.

    They want to increase clocks at least, but they're trapped on 32nm SOI. Why not move to 28nm-or-lower SOI? Because AMD is already working to convert its entire CPU line to bulk silicon, so it's not logical to make a new design for a still-not-ready 28nm-or-lower SOI process while they are already designing the new core for bulk. Keep in mind that SOI and bulk each require a complete redesign of the silicon just to convert from one to the other, so it's not even logical to convert Piledriver, or even Steamroller, to bulk silicon for only one year while they're working on the next architecture, which will be bulk silicon.
    Reply
  • jimjamjamie - Thursday, January 16, 2014 - link

    That makes sense, I was very confused as to why AMD were not going to refresh the FX line - even if HSA is the future for AMD, I presumed new FX sales throughout this year would have helped things along.

    Perhaps it wasn't worth the cost, but at the same time AMD could really do with keeping the fanboys on side.
    Reply
  • jjj - Tuesday, January 14, 2014 - link

    lol, now that AMD is paying you, you jumped to the other extreme. What do you get over Richland, and how does that deserve a positive conclusion? (And maybe a reminder is needed: the A10-6800K is $140.) Reply
  • Drumsticks - Tuesday, January 14, 2014 - link

    The author recognized that 100W TDP isn't really any different...? The 45W more mainstream range is where things are a lot more interesting, but please, ignore the 30+% gains. Reply
  • Conduit - Tuesday, January 14, 2014 - link

    I have been waiting almost 2 years for a mobile Kaveri telling myself the wait will be worth it. Looks like it actually may be as Kaveri performs better at lower TDP's than Trinity and Richland. Reply
  • Hubb1e - Tuesday, January 14, 2014 - link

    This guy actually gets it. These should make very nice cheap laptops able to actually play a game or two without vomiting all over themselves. At 95W the gains are not there, but at 45W it's impressive Reply
  • JDG1980 - Tuesday, January 14, 2014 - link

    What an odd choice of benchmarks. Why not use Photoshop (which supports OpenCL and is an important real-world application) instead of Agisoft (who?)? I can't help but wonder if Agisoft paid for their inclusion in Anandtech's benchmark suite. Reply
  • ddriver - Tuesday, January 14, 2014 - link

    You may notice that the entire review is not exactly rich in OpenCL compute... I mean, even for gaming cards AT usually includes several OpenCL benchmarks, but not for this product, where that is supposed to be its strongest point??? Reply
  • Conduit - Tuesday, January 14, 2014 - link

    I have to say, the 45W A8-7600 kicks ass. It's competitive with the 100W A10-5800K, I think that's where the benefit of Kaveri lies, in the low TDP department. Reply
  • UtilityMax - Tuesday, January 14, 2014 - link

    Indeed. However, I personally would like to see benchmarks that measure the power draw. The 65W TDP Richland A10s were nearly as power-thirsty as the 100W+ TDP parts. The A8 does look interesting. Reply
  • thegreatjombi - Tuesday, January 14, 2014 - link

    Any chance we can get HD5200 and HD5400 benches? The 5000 series has started to catch up with AMD in terms of GPU Compute. I am curious if AMD has left Intel in the dust once again, or has Intel actually made a big enough leap to stay in the game. Reply
  • thegreatjombi - Tuesday, January 14, 2014 - link

    Oh! It seems the 5000 series is just omitted in the CPU Performance benchmarks? Why is that? Reply
  • Fox5 - Tuesday, January 14, 2014 - link

    Where are the Iris Pro results in CLBenchmark? Where are the CPU results of CLBenchmark; is the GPU faster than Haswell's AVX2? Where's the rest of the compute benchmarks, the area that Kaveri is supposed to shine in? Reply
  • JDG1980 - Tuesday, January 14, 2014 - link

    Incidentally, will HTPC be covered in a different review? MadVR could be a good use case for Kaveri, as it requires quite a bit of shader power but isn't that memory bandwidth intensive. Reply
  • beomagi - Tuesday, January 14, 2014 - link

    Per the charts, why are the 100W APUs slower at 1280x1024 than at higher resolutions?? Reply
  • beomagi - Tuesday, January 14, 2014 - link

    Also, 45W APUs are faster? Are the benchmarks different? The charts only mention resolution. Reply
  • beomagi - Tuesday, January 14, 2014 - link

    Never mind; I now see this is a percentage difference compared to the slower chip. The title said FPS and that threw me off.
    Ignore! :D
    Reply
  • Dribble - Tuesday, January 14, 2014 - link

    Call me cynical, but it's just the same as the previous gen. If you want a small-form-factor, extreme-budget gaming box, these will be pretty good. For the rest of the world: if you don't care about games you'd do better going Intel, and if you do, Intel + a proper graphics card.

    As with the previous gen, it comes with a load of marketing-slide advantages which, if the previous gen is anything to go by, will come to nothing. I don't see the current range of AMD machines blowing away Intel machines with OpenCL/Stream/Fusion/whatever, and that was what was on the previous set of marketing slides.

    I always thought their best bet was mobile, but these days that market's getting really tough for AMD, as Intel has just spent the last few years optimizing power usage.
    Reply
  • UtilityMax - Tuesday, January 14, 2014 - link

    The biggest elephant in the room is that very few average people (those who don't visit this web site) care about playing games on laptops (or even desktops, considering the consoles). Once you ignore the gaming performance, the A10 APU effectively has the performance of a Core i3, but at a higher price. A Fry's or Best Buy "special" laptop with a Core i3 can cost as little as $400 or less, but the A10 laptops cost around $500. Intel's pricing is pretty aggressive on the low end, IMHO. Reply
  • jimjamjamie - Thursday, January 16, 2014 - link

    Not just in laptops; the price/performance ratio of the dual-core Pentiums is extremely good. The Haswell-based Pentium G3220 (3GHz dual core, 3.5MB cache) is available in the UK for just over £40, which is excellent value. Reply
  • Nagorak - Wednesday, January 15, 2014 - link

    Yes, the issue is that the hybridization of CPU/GPU really provides no advantage. For someone actually playing games the GPU is still too weak, and they'd be better off with a discrete card. For someone not playing games, the quality of the integrated GPU doesn't matter.

    Maybe I'm wrong and there are tons of people out there playing games at ~30 FPS with low settings. I just don't see why someone who wants to play games wouldn't try to cough up an extra $100 for a discrete GPU, and if you don't play games then even Intel's older HD GPUs are fine.
    Reply
  • mikato - Wednesday, January 15, 2014 - link

    *For someone actually playing [newer 3D intensive] games the GPU is still too weak and they'd be better off with a discrete card. Yep
    *For someone actually playing [older or lighter] games the GPU is good enough and you end up with a cheaper overall package without needing a discrete card.
    *For someone not playing games, they will benefit big time from HSA eventually. Not there yet and depends on the software.

    There are probably more people in the last two categories if you think about it. AMD isn't for us gamers right now unfortunately. And it's going to take a while for adoption for HSA to bring in the third category of people.
    Reply
  • srkelley - Wednesday, January 15, 2014 - link

    Some of us don't have a lot of money. It may be only $100, but it could mean giving up on buying games just to get a more powerful rig that can't play... games (that you would be willing to buy anyway). I'd like to buy/build a system that's great all around; I'd love to have the very best. But the very best is overkill, and some systems, while not the best, are still a bit too pricey.

    This APU will meet nearly all of my needs and exceed quite a few of them. I'm fine with not being able to max out my settings in every game; I'll be able to max most of them out at 1920x1080 @ 60fps (or higher) with this APU. If the improvements in hybrid CrossFire pan out (due to decreased latency and better implementation), I may be able to buy a super-cheap GPU at a much later date to cover the rest of what I would need it for. I'll also be able to cool my system for a lower price (or just use stock everything).

    Even if CF is a bust, the core product lets me spend a bit more on memory, better storage, a better case, and even pick up a few new games without breaking my budget (if I sell my current monitor for asking price, I may be able to use it to subsidize a slightly better one at the same time!). Right now I use a netbook, a smartphone, and a PC that would struggle to play the original Assassin's Creed at 30fps at 640x480. The monitor, speakers, and wireless keyboard set are the only nice things about my setup. Time to step into the future; it's more than good enough for me and people like me.
    Reply
  • Dribble - Thursday, January 16, 2014 - link

    You can buy a cheaper non-APU CPU and a discrete Radeon graphics card for about the same amount, and that would play games better and do everything else just as well, while still giving you all the AMD advantages (Mantle, etc.). It would also give a better upgrade path: when you are better off in the future, you can more easily upgrade the CPU or graphics to something faster.

    The APU only really wins when you need it in a very small box (with no room for discrete graphics).
    Reply
  • Nagorak - Wednesday, January 15, 2014 - link

    Well, I will say that one place this may actually be worthwhile is in an HTPC with a tiny case that can't even take a half-height graphics card. With anything that can take a half-height graphics card, it's moot. Reply
  • mikato - Wednesday, January 15, 2014 - link

    But why do you need a discrete card with an HTPC? You don't. Reply
  • rpsgc - Tuesday, January 14, 2014 - link

    No gaming benchmarks with dGPU? (I'm not talking about Dual Graphics.) Some people are interested on how this performs purely as a CPU, seeing as AM3 is apparently dead. Reply
  • YuLeven - Tuesday, January 14, 2014 - link

    Well, it costs US$173. You can buy a Pentium G + Radeon HD 7770 GHz Edition for ~US$175 on Newegg today. One setup gives you all-in-one subpar 720p gaming; the other goes as high as 1080p in many titles. In fact, it's nearly cruel comparing an HD 7770 GE to Kaveri's GPU.

    Well, unless you desperately need a one-chip solution, I see Kaveri as utterly pointless as far as gaming is concerned.
    Reply
  • Ian Cutress - Tuesday, January 14, 2014 - link

    It will be the focus of my next Gaming CPU article update; retesting over a dozen CPUs for this review, at 30 hours each, wasn't an easy task. Reply
  • YuLeven - Wednesday, January 15, 2014 - link

    Super, Ian! I can't wait for it. Reply
  • srkelley - Wednesday, January 15, 2014 - link

    Thanks for your hard work, you and Rahul did a great job! I always love reading the reviews at Anandtech. They're very informative, easy to read and leave me with a much, much better understanding of whatever they're covering when compared to how I went into it. Reply
  • Drumsticks - Tuesday, January 14, 2014 - link

    Any chance you can make a summary of the 65 and 45W APUs vs an i3-4330 at 54W? That would be a more apt comparison especially considering the price range. Reply
  • takeship - Tuesday, January 14, 2014 - link

Also, any chance that power consumption numbers are going to be added to this review? Looking around the rest of the internet, it would appear that AMD's idea of 45W/65W doesn't really jibe with the actual consumption figures. I.e. their 45W chip is actually pulling closer to 80W under load, which doesn't even make them perf/watt comparable with an i3. Reply
  • silverblue - Tuesday, January 14, 2014 - link

    I saw similar figures, but that was for the system as a whole. Reply
  • Traciatim - Wednesday, January 15, 2014 - link

    Keep in mind that measuring 80 watts from the wall through an 80% efficient supply has the whole machine consuming 64 watts. Reply
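Traciatim's correction is worth making explicit, since wall readings often get misread as component draw. A minimal sketch, assuming a fixed 80% supply efficiency (real PSU efficiency varies with load and model, so treat this as an approximation):

```python
# Wall-socket draw vs. DC load: a fixed-efficiency approximation.
# Real PSU efficiency varies with load, so this is only a rough sketch.
def dc_load_watts(wall_watts, psu_efficiency=0.80):
    """DC power actually delivered to the components for a given wall reading."""
    return wall_watts * psu_efficiency

# An 80 W wall reading through an ~80% efficient supply:
print(dc_load_watts(80.0))  # 64.0 W consumed by the machine itself
```

So a "45W" chip pulling 80 W at the wall may really be drawing around 64 W, with the rest of the system (board, RAM, drives) inside that figure too.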
  • aryonoco - Tuesday, January 14, 2014 - link

    "...do any AnandTech readers have an interest in an even higher end APU with substantially more graphics horsepower?"

    Definitely yes, for a low-end steam machine for casual gaming in the living room.

    The only way to hit the $400 price for a steam machine is to forgo the dedicated graphics card and use an APU/SoC. But none of these Kaveri SKUs, even the highest end A10 one, would cut it. I do not expect such an APU to beat $200 dedicated cards, but it needs more performance than this.

    If AMD could offer a Kaveri APU with say 30% more GPU power and the same CPU power, for around $200, I think it would be a very attractive option for a steam machine.

    Of course that would require AMD to finally start making respectable Linux drivers as well... which probably means it won't happen.
    Reply
  • Nagorak - Wednesday, January 15, 2014 - link

    I'd wait and see if the Steam OS even goes anywhere. Odds are strongly against it. Reply
  • nemesis1985 - Tuesday, January 14, 2014 - link

    1680x1050

A10-7850 fps: Company of Heroes 35 --- Sleeping Dogs 21 --- F1 2013 38
A10-7600 fps: Company of Heroes 61 --- Sleeping Dogs 52 --- F1 2013 42

    1920x1080 = same (problem)

and it says Kaveri has only 2 cores

or maybe it's just a misunderstanding on my part

    http://images.anandtech.com/doci/7677/G2%20-%20100...

    http://images.anandtech.com/doci/7677/G9%20-%2045W...
what's wrong with this bench?
    Reply
  • nemesis1985 - Tuesday, January 14, 2014 - link

    beomagi -
    Nevermind - I now see this is as a percent difference compared to the slower chip - the title said FPS and that threw me off.
    Ignore! :D

    yeah me too :D
    Reply
  • nemesis1985 - Tuesday, January 14, 2014 - link

    misunderstanding (very sorry) please delete my comment Reply
  • kwrzesien - Tuesday, January 14, 2014 - link

    Pretty disappointing.

    And what's the point of testing 1080p gaming at top settings? Does it even matter who is better when below 15 fps? Why not find a setting for each game in 1080p where at least one solution is over 30 fps average - that might show something interesting.
    Reply
  • mavromanitari - Tuesday, January 14, 2014 - link

    Anandtech Logic:

Let's match up a $170 AMD APU vs the Intel IGP flagship (4770R) that costs $392...

Nah, it still beats it at high resolutions, we need to do more! Oh well, let's take the $320 4770K and throw $100 at a dGPU...

We have done it!! Yay, the $420 combination beats the $170 APU....
    Reply
  • YuLeven - Tuesday, January 14, 2014 - link

    So a cheap Pentium / Athlon CPU paired with a dGPU costs less and gives more gaming performance. Yaaaaaaaaaawn AMD. Reply
  • thomascheng - Tuesday, January 14, 2014 - link

yeah, I agree, this type of benchmark doesn't paint a proper picture within the same category. It's like a Ferrari vs a Honda Civic. The Ferrari wins, but costs a lot more. At least have a price-to-performance benchmark, which is much more useful. Reply
  • Bob Todd - Tuesday, January 14, 2014 - link

    If it wasn't clear from the article, the dGPU setup doesn't need that 4770K to be the fastest. You could buy a cheap dual core Pentium + 7750 for under $170 and still be in the same position. The only good news I see from a quick scan through the article is the A8-7600 @ 45W, mostly because there may be some hope of decent mobile parts that can do 720p gaming. Reply
  • YuLeven - Tuesday, January 14, 2014 - link

In fact, as of today you can buy a Pentium + HD 7770GE for ~US$175 on Newegg. Reply
  • thomascheng - Tuesday, January 14, 2014 - link

if Mantle and TrueAudio take off, I think the APU would be a better deal. Plus you get a smaller form factor and better support for the new technologies. Reply
  • YuLeven - Tuesday, January 14, 2014 - link

The HD 7770GE is a GCN part, so it has Mantle support. This renders the APU even less compelling. Reply
  • Bob Todd - Tuesday, January 14, 2014 - link

    As YuLeven already pointed out, you can get much faster GCN dGPUs for the same price. But I'll go ahead and discredit the 'smaller form factor' advantage. AMD is nearly non-existent in mini ITX. There are a whopping 2 FM2+ mITX boards on Newegg (and only 2 more FM2). It's so bad that Newegg doesn't even expand the "Form Factor" filter by default for AMD, but they do for Intel (who has 24 mITX boards just for LGA 1150).

    Like I said, I have some (tiny) hope for mobile Kaveri to enable 720p gaming in a portable package, but even there I am worried about idle power consumption compared to Haswell. For desktop, Kaveri looks like a bust since even niches like htpc gaming aren't in the realm of possibility at 1080p.
    Reply
  • drezden444 - Tuesday, January 14, 2014 - link

Well. A good step forward, but still a small step. The 45W improvement is a really good one, and obviously AMD can make an excellent notebook APU; I think we may see a 35W 2.8-3GHz notebook A8 APU. A question: what about overclocking? I'd be delighted to know whether this CPU has potential. And keep in mind that I could buy a 760K (and overclock it to between 4.5-5.1GHz) plus a 7750 (or maybe even a 7770) for the same money. So, what about those dollar figures? Reply
  • YuLeven - Tuesday, January 14, 2014 - link

What I want to see from Kaveri's mobile appearance is power consumption. Trinity had a decent GPU, but the designs it won gave you a pathetic 3 hours of WiFi battery life. Trinity ULV was awfully slow and nowhere close to Haswell's battery offerings. If Kaveri keeps on like this, it will again end up only inside boring, thick, hot and loud budget 'gaming' notebooks. Reply
  • drezden444 - Tuesday, January 14, 2014 - link

True! But I'm really impressed by the 25W A6-5200 (Jaguar). That's a CPU with bigger potential than Kaveri; I kind of think that if they just merged the best of both APUs into one it would be much better. And still, a 35W 2.8GHz Kaveri for budget 'gaming' laptops will be a good deal for gaming at 1366x768. Reply
  • drezden444 - Tuesday, January 14, 2014 - link

AMD is very important. Without AMD, Intel would still be selling us those crappy P4 CPUs at $1000 for a single-core unit. Well, please do something - be better. Two years and you can't come out with a better product than the old Phenom CPUs. I'm doing my job to help you - I always build myself AMD PCs - but if I could afford a second computer, it would be an Intel build. And just to finish: now I'm buying myself an FX6300 + 7770, for just a few dollars more than your Kaveri APU, really disappointed at not seeing these Steamroller cores in an FX AM3+ version. Reply
  • Nagorak - Wednesday, January 15, 2014 - link

AMD may be getting close to being done. They haven't been competitive in a long time. Even now, in the one place where this sort of integrated GPU would be helpful (mobile), they aren't competitive. In their infinite wisdom they sold off their mobile graphics division to Qualcomm, and it's now doing pretty damn well.

    A company can only make so many boneheaded moves before giving up the ghost.
    Reply
  • Da W - Tuesday, January 14, 2014 - link

    Here's what goes into my old Phenom X3 HTPC.
    Say hello to a new steambox used for streaming from my beast.
    Reply
  • Da W - Tuesday, January 14, 2014 - link

I personally have an i7 4770K and my dick is not any bigger. Take that intel fanboys! Reply
  • silverblue - Tuesday, January 14, 2014 - link

There are a few things that can be inferred about Kaveri. The first is that there's little point going above 65W, let alone 45W, unless minimum framerates are the most important thing to you (Kaveri suffers here anyway, though at unrealistically low rates). Secondly, single threaded performance is still poor - Steamroller's main benefit is removing the MT bottleneck and it appears to have succeeded, but the hope was for a bit more. Finally, having 512 shaders means little over 384 at the moment (drivers and bandwidth permitting).

It may be worth revisiting these results in a few months when some of the benches can benefit from HSA, and TrueAudio and Mantle are in use along with more mature drivers, but regardless, the A8-7600 looks like the sweet spot. I had hoped for more on the CPU front, but I guess that's yet to come. A 45W Carrizo built using high density libraries would probably equal the 7850 at the very least. A 95W Kaveri with lower clocked CPU cores and 768 shaders would be very interesting indeed.

    I think I can see why there are no FX CPUs planned, but we don't know the benefit of L3 on performance without them.
    Reply
  • JDG1980 - Tuesday, January 14, 2014 - link

    The GPU is pretty clearly bottlenecked by memory bandwidth. It doesn't matter how many shaders they throw at it as long as they're stuck on DDR3. They need to either move to DDR4/GDDR5 as system memory, or add extra cache RAM on-die. Until then, iGPU performance won't get much (if any) better. The extra shaders will still help in GPGPU applications, though. Reply
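To put rough numbers on that bottleneck - a back-of-envelope sketch using nominal spec-sheet figures (not measured throughput):

```python
# Theoretical peak memory bandwidth: transfer rate (MT/s) x bus width (bits) / 8,
# reported in decimal GB/s. Nominal spec figures, not real-world throughput.
def peak_bw_gbs(transfers_mts, bus_width_bits):
    return transfers_mts * bus_width_bits / 8 / 1000.0

ddr3 = peak_bw_gbs(2133, 128)    # dual-channel DDR3-2133 (2 x 64-bit channels)
gddr5 = peak_bw_gbs(4500, 128)   # HD 7750-class GDDR5 (4.5 Gbps effective, 128-bit)
print(round(ddr3, 1), gddr5)     # roughly 34.1 vs 72.0 GB/s
```

Even on DDR3-2133, the iGPU gets barely half the bandwidth a 7750-class card enjoys, which is why extra shaders stop paying off in games.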
  • thomascheng - Tuesday, January 14, 2014 - link

    To me, this APU is perfect for SteamBox. GPU centric over CPU, dedicated Audio processor, Mantle, allows for small form factor. The chip itself sounds like a console. Reply
  • Krysto - Tuesday, January 14, 2014 - link

HSA seems like a nice architecture, but I wonder what's in it for the ARM chip makers? Why would they standardize a compute architecture like this around a pseudo-ISA, when they could all do it on top of ARMv8, which would probably greatly simplify things for them, with better performance too?

Is it because they hope that HSAIL being supported by Microsoft will eventually make it possible for new apps to run on both x86 and ARM? But Microsoft has already done that with WinRT.

So I don't get it. Why would ARM and its partners want to be involved in this, when they could build their own HSA alternative and wouldn't have to drag AMD along with them - which, let's face it, is not successful in PCs and nonexistent in mobile.
    Reply
  • codedivine - Tuesday, January 14, 2014 - link

    HSAIL is a pseudo-ISA not only for the CPU, but also for the GPU. There is no standardized pseudo-ISA for the GPU other than HSAIL. ARMv8 only applies to the CPU side. Reply
  • davio - Tuesday, January 14, 2014 - link

A 7-year-old Asus motherboard with 4GB RAM and an Athlon 64 X2 2200+ with a 1GB BFG GTS 250 graphics card. Upgrading: would a new FM2+ board with an A10-6800K be equivalent graphics-wise for Adobe and AutoCAD (no gaming... ever), or would the new A10 Kaveri be better? Reply
  • just4U - Wednesday, January 15, 2014 - link

Well, power-wise the $120 CPU would likely be the route to go. You sort of missed the curve, really, since the leaps in performance of the C2D in 2007 ('09 for AMD's PhIIs) made the old X2s obsolete... so yeah, in that sense an excellent upgrade. Video-wise? Hmmm... I want to say it's going to be better, but I'm not sure by how much. Reply
  • tipoo - Tuesday, January 14, 2014 - link

What's up with the Iris Pro's bottom score in the Bioshock Infinite minimum FPS test, when it did well in the average FPS? Does it have framerate drops? Reply
  • Nova17745 - Tuesday, January 14, 2014 - link

    So the minor improvement of integrated graphics makes this all ok? I'll keep my i7-4770k Reply
  • just4U - Wednesday, January 15, 2014 - link

Yes.. and I'll keep my 2700K rather than go for your i7-4770K, as I think it's within a few percentage points of yours - and can maintain higher overclocks (if I wanted to OC) due to the temperature issues of IVB/Haswell. But if I wanted to build a light streaming box for the TV room (no games) without spending a lot, this would be an excellent option. Reply
  • jrs77 - Tuesday, January 14, 2014 - link

I couldn't care less about the iGPU tbqh, as the majority of the professional software I'm using doesn't make any use of it, but relies purely on the CPU.

I'd like to see more real-life benchmarks, as they would show that Kaveri isn't at all to be recommended if you're working with your PC.
Image editing (Adobe Ps), image rendering (Cycles, LuxRender, VRay), office (MS, OOP, Libre), music editing (FL Studio, Reason, etc). All this software is heavily used by a lot of home users, and basically none of it makes any use of GPGPU or OpenCL.

Kaveri is good for a cheap HTPC, but so is a cheap Intel i3.

Everyone who is building a PC for playing games would never consider doing it without a dedicated GPU, so why should we care about these gaming benchmarks on the iGPU?

I was hoping that AMD would improve the CPU part, but seeing that my old i5-3450 still crushes the A10-7850K in CPU performance is actually a rather sad story for AMD. In Cinebench R15 there's almost a 100-point difference between the two, or 25% if you like, and both chips cost the same, while the Intel i5 is much better in energy efficiency.
    Reply
  • YuLeven - Tuesday, January 14, 2014 - link

Your Core i5 crushes the A10-7850K's CPU performance while costing the same, and a cheap Athlon or Pentium CPU paired with an HD7770GE wipes the floor with the A10-7850K's gaming performance, also costing the same (I just checked the prices at Newegg).

It's all about picking right. If you go with this APU, you are stuck in the middle: so-so CPU performance, so-so 720p gaming.
    Reply
  • codedivine - Tuesday, January 14, 2014 - link

    Blender Cycles is getting OpenCL acceleration. I believe Apple is helping out with that. And LibreOffice folks also showed OpenCL acceleration. Reply
  • jrs77 - Tuesday, January 14, 2014 - link

Blender Cycles getting some OpenCL love is nice, but it's not actually the most relevant renderer. I'm using either V-Ray or LuxRender most of the time, and V-Ray 2.0 RT is a joke, as it can't render scenes using the chosen textures, but only renders standard materials, just like any other GPU-based renderer currently.

And I don't think the Intel iGPU is much slower for GPGPU, especially knowing that it's faster in video encoding than the AMD iGPU, so the problem of the bad single-threaded performance still speaks against AMD.

The smaller 45W Kaveri chips are much more interesting imho, especially for HTPCs, as they can be cooled passively and will be dirt-cheap at $50 a pop.
    Reply
  • mikato - Wednesday, January 15, 2014 - link

Well, come on, you're the one who mentioned it to begin with, and LibreOffice too. I imagine these will show substantial improvements. Reply
  • chizow - Tuesday, January 14, 2014 - link

    Shocking that Kaveri is slower than Richland in many CPU-related tasks, it's as if AMD is pulling a Bulldozer all over again. Wasn't Steamroller supposed to redeem the whole "Construction" line of CPUs? It's no wonder they decided to cancel FX for the desktop, it'd be Bulldozer all over again. Reply
  • JDG1980 - Tuesday, January 14, 2014 - link

    Steamroller offers about 10%-15% better IPC in integer benchmarks, and reduces the multi-threading penalty. The problem is that Kaveri has to run at a lower clock speed, because Global Foundries can't get their act together on 28nm. As a result, you've got 3.5 GHz Steamroller cores on Kaveri competing with 4.1 GHz Piledriver cores on Richland, and the IPC improvement isn't good enough to make up for that (especially on floating point). Reply
  • silverblue - Tuesday, January 14, 2014 - link

    The other problem is, they streamlined the FPU to remove an MMX pipe, and... that's it. Still, it reduces power usage.

    Following AMD's own Opteron roadmap, Steamroller was meant for greater parallelism, nothing said of IPC - that's for Excavator. A new FPU, perhaps? I'd certainly like to see larger cores than currently, or a smarter way of dealing with the workloads in single threaded work on a modular basis.
    Reply
  • mikato - Wednesday, January 15, 2014 - link

    I'd be happy with even just 6 and 8 core Steamroller chips. FX is dormant for now? Nothing appealing to gamers at all. It seems like a race against the clock with HSA - that stuff is needed ASAP. Ok, if Excavator improves IPC (and hopefully they can increase the clock speed again) then it had better get here soon as well. Reply
  • iisdev - Tuesday, January 14, 2014 - link

    Thanks for the detailed review.

I was hoping that Skyrim would be included in the performance tests; Intel used it specifically to demonstrate Haswell's iGPU capabilities and it would have been great for a comparison.

It doesn't sound like the A10-7850K would be capable of Skyrim on Maximum at 60fps (at either 1080p or even 720p). Is that a correct analysis?
    Reply
  • neogodless - Tuesday, January 14, 2014 - link

    Is the A8-7600 identical to the A8-7600 except for the differences?

A8-7600: 2M/4T, 3.3 GHz base / 3.8 GHz turbo, DDR3-2133, 65W, R7 (384 SPs @ 720 MHz)

A8-7600: 2M/4T, 3.1 GHz base / 3.3 GHz turbo, DDR3-2133, 45W, R7 (384 SPs @ 720 MHz)
    Reply
  • nightbringer57 - Tuesday, January 14, 2014 - link

    "Is the A8-7600 identical to the A8-7600 except for the differences?"

I think this sentence is so awesome I'm gonna print it and read it aloud ten times a day before waking up :D
    Reply
  • neogodless - Tuesday, January 14, 2014 - link

    :) I hadn't read everything and later realized they just had different TDP configurations. Oops!

    Though later they say they "also took a few AMD processors of varying TDPs:" and then show a chart of Intel processors.
    Reply
  • mikato - Wednesday, January 15, 2014 - link

    I didn't realize Yogi Berra was into computers. Reply
  • Gadgety - Tuesday, January 14, 2014 - link

Excellent review. I particularly appreciate the comparison with the much more expensive Iris Pro, and with the dGPU. I hope you incorporate an update with DDR3-2400 memory modules. That said, I think I saw somewhere a 65W 7800 version with the full 512 GPU stream processors. Hopefully that one will be launched to enable a truly SFF family PC capable of some gaming. Reply
  • Krysto - Tuesday, January 14, 2014 - link

Any chance we could see Nvidia adopt HSA in the next 2 years? I think they're just being a pain in the butt. ARM competitors don't seem to have a problem with supporting HSA. What's Nvidia's deal? They always seem to go for proprietary solutions instead of supporting a standard. They don't even support OpenCL in Tegra K1, but support CUDA 6.

I think I even read an article about the history of OpenGL vs DirectX that said the OpenGL coalition started failing when Nvidia kept rolling their own proprietary extensions, which gave Microsoft the opportunity it needed to take over with DirectX. I want to like Nvidia but they are making it so hard.
    Reply
  • mr_tawan - Tuesday, January 14, 2014 - link

I'd love to see how it works with another dGPU, like an R7.

Also, it might be interesting if we could utilize the GPU in the APU and the graphics card at the same time, for different things. For example, the GPU in the APU works on the OpenCL/compute stuff while the dGPU works on graphics exclusively.

It looks like many AT readers do not realize how significant a change HSA brings. Granted, based on the current software ecosystem we don't see any advantage over the ordinary CPU+GPU combination. AMD has to work really hard to get developers on board, otherwise they are done for.
    Reply
  • iTzSnypah - Tuesday, January 14, 2014 - link

    Why didn't you include any memory overclocking? The only thing I am interested in knowing is if AMD has fixed their controller so you can actually run high speed RAM (2400+). Reply
  • schizoide - Tuesday, January 14, 2014 - link

I'm frankly astonished they DIDN'T introduce essentially the PS4 APU with Kaveri. The CPU actually doesn't matter much for gaming, and a price point around $300 would be no problem at all. I'd buy one and build an HTPC/steambox around it in a hot second.

Very disappointed that AMD didn't do this. It seemed like such an obvious path.
    Reply
  • thomascheng - Tuesday, January 14, 2014 - link

I think it would be a different story if they had shown some real HSA or Mantle programs, but they launched a product whose big selling points are HSA and Mantle. Mantle would have been a big win for them in Battlefield 4, but that got delayed, and there are no HSA programs. I think next year they will be in a much better position. Reply
  • schizoide - Tuesday, January 14, 2014 - link

    If mantle ends up being meaningful, and if HSA gains are tangible, perhaps. But even with a 45% speedup Kaveri can't compete with the ~7870 in next-gen gaming consoles, and that renders it unacceptable as a gaming solution. Reply
  • thomascheng - Tuesday, January 14, 2014 - link

I agree for hardcore gamers, but I believe ExtremeTech showed a chart saying that 1/3 of all gamers are using Intel chips with the iGPU for gaming; next would be Intel chips with mid-level Nvidia/AMD dGPUs. I think this chip is for those two crowds. The APU won't beat the mid-to-upper crowd, and it's really not designed for them. Reply
  • schizoide - Tuesday, January 14, 2014 - link

    Don't see how, as it doesn't remotely compete with even <$100 discrete GPUs. It's simply not a competitive gaming solution at any level past facebook games and ports from the previous console generation. Reply
  • silverblue - Tuesday, January 14, 2014 - link

    A bit unfair considering the chip just launched and drivers will improve in time. What's more, if we were to assume the GPU was equal to, say, an $80 card, this means a $90 CPU, and how many $90 CPUs will realistically beat this?

    The real disappointment here is not seeing what the new APIs really do for Kaveri, so we're left guessing.
    Reply
  • jrs77 - Tuesday, January 14, 2014 - link

An AMD X4-760K + HD7750 1GB GDDR5 will beat the Kaveri at every task while costing the same. jfyi Reply
  • takeship - Tuesday, January 14, 2014 - link

Most Celerons and all Pentiums based on Sandy Bridge will own Kaveri at all but heavily threaded tasks. As above, an HD7750 or R7 240, especially with GDDR5, will slaughter Kaveri in FPS. Reply
  • thomascheng - Wednesday, January 15, 2014 - link

I think when the updated drivers (end of this month?) are released that might be different. Another thing about the APU is that it is a bit more future-proof, with OpenCL becoming more popular. I personally won't get one now, but I might bite once there is a price drop to $160. Reply
  • YuLeven - Wednesday, January 15, 2014 - link

I don't put any faith in drivers. If a driver improves Kaveri's GCN 1.1 GPU, it will likely do the same for a GCN 1.0 dGPU. The cheap Pentium + dGPU is still a Kaveri killer.

What I wanted to see is Dual Graphics working to the point where Kaveri + a very cheap dGPU would beat a Pentium + cheap dGPU. Perhaps in some of those scenarios Kaveri would make a great choice.

However, Dual Graphics is so broken that for now I can only hope. In some cases I've personally seen a Dual Graphics solution show higher FPS, but actually look choppier due to bad micro-stuttering.
    Reply
  • mr_tawan - Wednesday, January 15, 2014 - link

AFAIK in my country (Thailand), APUs are quite popular in internet cafés' gaming machines. They are cheap, yet powerful enough to drive online games. This might also be the case for countries like China or Korea where internet cafés also provide gaming services, I guess.

And that might be where the 1/3-of-machines figure comes from. I guess.
    Reply
  • silverblue - Tuesday, January 14, 2014 - link

You're talking about a GDDR5-based PC - as nice as that sounds, it's totally at odds with current system setups. In addition, would you really want eight Jaguar cores? Kaveri would outperform them even in its slowest 2M/4C configuration, and single threaded speed would be significantly slower with a 1.6GHz Cat core.

    Or you could just buy a PS4... helps AMD either way, and you get what you want.

    Until we see Mantle, TrueAudio and HSA in action, Kaveri will remain underwhelming to most. It's kind of strange to bring out the hardware without the software.
    Reply
  • rauelius - Tuesday, January 14, 2014 - link

    This would help make an AMAZINGLY tiny Steam Machine....I just hope the drivers are there Reply
  • nathanddrews - Tuesday, January 14, 2014 - link

    Would it? That probably depends on what kind of Steam machine you mean. It can't play games very well on its own and would be more power than you need to stream from a more powerful machine. Seems like at best it would be an entry-level Steam machine for 720p gaming... and even then it would cost about the same as a cheap Intel CPU + low-profile 7750, which would provide better gaming performance.

    If you were only going to stream games from another machine, the NVIDIA Shield would be a smaller and cheaper alternative that you can take anywhere in the house.
    Reply
  • veri745 - Tuesday, January 14, 2014 - link

FYI, the "Average" framerate button for the Bioshock Infinite "Xtreme" settings benchmark actually brings up the chart for the "Performance" settings. Reply
  • UtilityMax - Tuesday, January 14, 2014 - link

Outside of the "grandpa gamer" market, which is tiny, the APU so far seems to induce a big yawn. The HSA and Mantle benefits won't be mainstream for quite some time, so they're not a consideration for someone buying an APU this year. AMD originally promised a 20-30% improvement in both CPU and GPU performance over the Richland A10. In this sense, the new Kaveri A10 APU is a disappointment, in many tests struggling to beat the Richland parts by any meaningful margin (or the Haswell i3 in tests not using graphics).

I do like how much the A8 has improved. It has a lowish TDP, yet is not significantly slower than the A10. Considering the "product positioning" price of $112 from the graph above, the A8 makes sense as an APU for a cheap "grandpa"/"kid" gaming rig or a general purpose computer. It's only slightly slower in CPU compute than the i3 costing $20 more, but much faster in games.
    Reply
  • UtilityMax - Tuesday, January 14, 2014 - link

    Sorry, meant to say "not using any graphics" in my reference to i3. Reply
  • dgingeri - Tuesday, January 14, 2014 - link

    Seriously, I wonder why they bothered with covering this. The desktop is definitely not the place for this chip. It could potentially be competitive in laptops, but only if they cut the price. If Intel didn't exist, this would be a decent chip, but as it is, Intel beats down AMD in every market except low end SMB servers. Reply
  • andrewpk - Tuesday, January 14, 2014 - link

Would've really loved to see a more direct comparison at price points with current-gen hardware, considering an A10-7850K Kaveri is going for $189 (as of 2014-01-14 on Newegg) and the closest Intel Haswell chip is the i5-4430 at $189. Reply
  • hoboville - Tuesday, January 14, 2014 - link

HSA makes good promises, but the catch-22 of minimal software support for this new paradigm leaves AMD trailing, because they still have to rely on a sub-par architecture. They will continue to be far behind in FP workloads until they drop Bulldozer and its derivatives. The same goes for single-threaded applications.

Good iGPU efficiency and functionality really make the most sense for mobile, as power savings are better, or can be more finely tuned, when you have a fully integrated system.
    Reply
  • n0b0dykn0ws - Tuesday, January 14, 2014 - link

    Any forthcoming HTPC tests? Especially 23.976? Reply
  • Mendoza - Tuesday, January 14, 2014 - link

Thanks, guys, for the review. My question is: since AMD changed the process for this chip, which limited CPU clocks heavily, did you try overclocking the iGPU at all?

I'm very curious whether the bulk process hurt the iGPU's ability to be overclocked, because Kaveri's base GPU clock is very low compared to what the 7000/R7 series can run at.

If you get a chance, please try running the GPU at higher clocks. Leave the CPU performance alone.
    Reply
  • testbug00 - Tuesday, January 14, 2014 - link

    Could you maybe test for playable FPS (when doing iGPU setups)?

    Knowing you get under 10 FPS on a game on extreme + 1080p is not helpful.

    Knowing what you get on medium at 1080p, or what you need to get over 30FPS at 1080p, would be far more useful.

    Thanks~
    Reply
  • nathanddrews - Tuesday, January 14, 2014 - link

    Should they then custom tailor each and every benchmark to suit each platform? AMD said it could do Sleeping Dogs 1080p at 30fps, but it can barely do 22fps at 1050p. The rest of the games AMD says can be played at 1080p30 are old enough to not pose a challenge really.

    If AMD had promised 720p60, they'd be a lot better off - and on par with PS4/Xbone.
    Reply
  • testbug00 - Wednesday, January 15, 2014 - link

All I ask is that they look for settings that give something that can at least be considered playable. 25-35FPS is the minimum for most people.

They bench games that come in at sub-10FPS across the board (Xtreme settings, 1080p), and I just want to understand why it is done this way?
    Reply
  • nathanddrews - Thursday, January 16, 2014 - link

    TH just posted their review which uses medium 1080p settings. Perhaps those are more to your liking? They also try out Dual Graphics mode with a discrete R7 240. It still needs work.

    http://www.tomshardware.com/reviews/a10-7850k-a8-7...
    Reply
  • nathanddrews - Thursday, January 16, 2014 - link

    Here's even more resolutions and tests at G3D:

    http://www.guru3d.com/articles_pages/amd_a8_7600_a...
    Reply
  • nader_21007 - Saturday, January 18, 2014 - link

Do you see how much Intel's $400 CPU (4770R) struggles in playing games? In Bioshock at 1024 res, all the AMD APUs have a minimum frame rate of more than 28, while the 4770R has a minimum of 6.23.
This applies to the other games too. Iris Pro has the lowest minimum frame rates among the IGPs, meaning unplayable games.
    Reply
  • dwade123 - Tuesday, January 14, 2014 - link

Another AMD product failed to live up to its hype. Them turtle cores are just turtles. Reply
  • dosmastr - Tuesday, January 14, 2014 - link

Looks to be getting pretty complicated, with apples-to-apples being less and less possible.

I would be curious to see if you spent 300 bucks on AMD vs 300 on Intel - just on the board and CPU - and kept everything else the same.

Or even take whichever GPU is embedded in Steamroller and pair that with an Intel chip (and of course later on use CFX on the AMD box - if it's there we should use it... when they make it available...).
    Reply
  • just4U - Wednesday, January 15, 2014 - link

    $300 wouldn't really be fair.. and would negate the reasons to really go with an AMD type APU. Since.. you can get an i5/cheap board for that.. and while AMD boards are more feature rich at lower price points, there really isn't any point in splurging what you saved on an even better one.. Reply
  • nos024 - Tuesday, January 14, 2014 - link

    Disappointing. Where's the 30fps@1080p performance? I was hoping for a mini-itx sff build for the living room. Time for a PS4. Reply
  • nathanddrews - Tuesday, January 14, 2014 - link

    Save your money. One of my HDTVs has a used HP 6200 SFF workstation connected to it. i3-2100 with 4GB RAM, 500GB HDD. I got it off Craigslist for $100 and tossed in a low profile 7750 for another $80. It easily handles 1080p60 at low-med settings in addition to being a great HTPC. I scored a slick deal getting it for $100, but I see them selling for $200 all the time. Still a bargain. Turning up the detail it can still sit above 30fps, depending on the game. I've been really impressed with it. It's dead quiet and the 7750 gets the power it needs from the PCIe slot.

    This isn't my video:
    http://www.youtube.com/watch?v=wBJRM7XIYNE
    Reply
  • tanjali - Tuesday, January 14, 2014 - link

    They got me on 28nm! Reply
  • Since1997 - Tuesday, January 14, 2014 - link

    How does Kaveri stand up against an Intel G3220 paired with a weak GPU like the Radeon 7730? Cost is more or less the same. Reply
  • UtilityMax - Tuesday, January 14, 2014 - link

    Seeing these benchmarks, I wouldn't be surprised if g3220 + Radeon HD7730 was a better performer for games. The Kaveri GPU is based on the 7750 architecture. However, it seems like putting HD7750 on die does Kaveri no good. The memory speed is probably the bottleneck here. Reply
  • samal90 - Tuesday, January 14, 2014 - link

    So basically, we won't know the real performance possibilities of kaveri until HSA and/or mantle becomes integrated in more software. By that time, carrizo will come out and people will forget kaveri. Reply
  • haukionkannel - Tuesday, January 14, 2014 - link

    Yes, but AMD did it once with its 64-bit CPU. We just have to hope that Intel also considers HSA a useful feature. Most probably it will, but it may be that support only comes when Intel jumps on the HSA train, so until then it is just a promise of what may become. Reply
  • thomascheng - Tuesday, January 14, 2014 - link

    HSA is not going to be a big deal now; I think the biggest selling points of Kaveri are Mantle, TrueAudio for gaming, and the growing number of OpenCL apps coming out. The HSA effect won't be felt for at least another year, but AMD needs to release the chips to get the ball rolling. Mantle is supposed to reduce the need for the CPU by a lot. If there are a handful of AAA titles that use Mantle, I will probably pick up a Kaveri APU just for those games. In a year, boost the Mantle performance with an R9 270x on sale. That's my game rig, but my workstation will still be an Intel-based chip until HSA takes off. Reply
  • talos_2002 - Tuesday, January 14, 2014 - link

    How about testing gpu accelerated daily applications everybody uses?
    E.g. flash and x264 video playback, Skype video calls (uses DX since v5) and how it performs with heavy flash websites on modern browsers.
    Real life daily stuff :-)
    Reply
  • Sabresiberian - Tuesday, January 14, 2014 - link

    Every PC hardware business recognizes that a significant portion of sales are driven by the high-end successes, the products that capture the attention of builders. Sure, they may be a smaller part of your income in a strict sense, but the people that buy top-end tend to be the people that regular folks listen to when they are looking for advice. So, if I say "Intel is better" anyone in my family is going to buy Intel even though they are only spending $500 total on their PC.

    Competing for the top spot in your hardware lineup means you are thought of as being interested in producing the best products. Competing at a lower point, and only at a lower point, means people think of you as the "cheap option".

    AMD's growth, AMD's respect in the minds of the enthusiast community - meaning in the minds of the people that make recommendations to a large portion of the mainstream buyers - was created by bringing us AMD64 (x64) architecture and a CPU that was better than what Intel had at the time. It seems to me that the creation of top competing products is what caused AMD to grow - and the decision to back away from that is what caused them to shrink again.
    Reply
  • just4U - Wednesday, January 15, 2014 - link

    It's not that they backed away.. they simply fell behind.. and the Athlon days of supremacy didn't really do it for them financially.. didn't matter that they had the better CPU. Reply
  • MisterNiceGuy - Tuesday, January 14, 2014 - link

    This is so BS,
    We have here a site that tests new CPUs, GPUs and chipsets, and they are still using Windows 7 to give Intel a little upper edge...?
    We have the i7 4770 paired with a low-end GPU just to stay on TOP, and AMD's new CPU does not get any Crossfire action?
    How about you detail the overall price of the system test setup?
    (the price the vendors suggest as selling price, at least)
    Grow some honesty on the next benchmark.
    Reply
  • silverblue - Tuesday, January 14, 2014 - link

    I doubt the core parking updates benefit Kaveri as much as they would've with previous generation chips.

    I expect there will be Crossfire benchmarks in the near future. Nobody else seems to have done them, either.
    Reply
  • Panzerknacker - Tuesday, January 14, 2014 - link

    Really a shame to see Kaveri failing to deliver. The 7850 is not faster than the 6800; everybody can argue what they want, but the benchmarks stand for themselves. I am looking at gaming especially, and the 6800 is JUST as fast as the 7850. If you want 60 FPS in a modern game you are going to run at low settings, and then there is no improvement at all. It remains a question how Kaveri can be quite a bit quicker at higher quality settings; maybe the lower-clocked Kaveri CPU is simply preventing the APU from reaching higher FPS. Anyway, I'd rather run 60 FPS at crap settings than 30 FPS at higher settings, so this is not for me.

    Really a shame, I had high hopes of finally seeing a good improvement in Bang/Buck, but because Kaveri is priced higher it will be even more expensive for the same performance.

    Really hoped AMD would break the standstill in the CPU market. My friend owns the 3770 also reviewed here; he got it 1.5 years ago. The damn price of it INCREASED 30 euros in those 1.5 years. Have you ever bought a PC that you could sell for MORE after 1.5 years?

    Kaveri could have ended this, but it isn't. Basically what is happening in the consumer market is the following:

    Performance is staying the same, only TDP is SLOWLY decreasing, and our computing devices are at the same time decreasing in physical SIZE. This will continue until this current level of performance fits in a tablet, ultrabook or mobile phone. From then on, manufacturers will start to focus on increasing performance again. At that point the big power-hungry desktop system will be dead.

    I can understand this, but the part that smells about it is that as an enthusiast, I don't give a crap about TDP. I just want a faster, bigger, more powerful monster able to pump out more FPS. Basically, if you bought a new and fast computer anywhere in the last 3 years, you are probably set for the coming 5-10 years because NOTHING faster will come out. And because the enthusiasts want the fastest and don't care about TDP, prices will not be going down either. In the end, people pay for GFLOPS, not for performance/watt. If an FX8350 were only 3% faster than an evenly priced i5, but at double the TDP, we enthusiasts would still buy that FX8350.
    Reply
  • nader_21007 - Saturday, January 18, 2014 - link

    The increased price of Intel's 3770 is because Intel failed with the 4770. The 4770 increased power draw over the 3770 but the performance stayed the same, or even got worse. Can't you see Intel's $400 CPU (4770R) struggle in games? In BioShock at 1024 resolution, all the AMD APUs have a minimum frame rate above 28, while the 4770R has a minimum of 6.23. As is the case in the other games too. Reply
  • Mathos - Tuesday, January 14, 2014 - link

    I'm annoyed by the level of bias I see in this review. It's a given that a 4c8t i5 or i7 will outperform a 2-module, 4-thread APU. That's what they're recognized as in Win 8.1 now, which puts them in line with 2c/4t processors; remember, there are still only 2 FPUs on that 2-module chip. The Haswell i3-4330 in this review seems to be the closest to a reasonable comparison on those grounds. Though even an i5 4570 could have been used for a near clock-for-clock comparison, since that one is 4c/4t. The i7 Iris Pro I can understand being in the gaming reviews, though it's only available as an integrated on-board solution, and the CPU can never be upgraded thereafter.

    The review would have been useful if it had compared the Kaveri APUs directly to the previous generations, Richland/Trinity and possibly Llano, on a clock-for-clock basis. That way we could see how quantifiable the IPC improvement was in real-world situations. So sadly, this has been one of the least useful reviews I've seen come out of Anandtech.
    Reply
  • nos024 - Tuesday, January 14, 2014 - link

    Why is that bias? AMD claims to perform better than an i5. If you go back to the pre-launch slides, they were benchmarked next to an i5 4670k. Reply
  • UtilityMax - Tuesday, January 14, 2014 - link

    i5 CPUs don't have hyper-threading, so they're 4 core, 4 thread. Moreover, AMD advertises the APU as four cores (which share a ton of stuff with each other). Comparing the new APU against the i5 is totally justified. There are a lot of people expecting the Kaveri APU to elbow past the i3, and maybe get closer to the i5. Oh, poor souls. Reply
  • ATLSHOGUN - Tuesday, January 14, 2014 - link

    As a long-time AMD supporter (owner of Athlon, X2, X4, X6, 8350 & A10) I am very disappointed. I feel like AMD has been lying to us for three years now. We were promised 30% gains and this is nowhere close to that. We were promised a high-end Steamroller chip and it appears we won't get that either.
    The clear truth is that AMD is now exclusively a low-mid range gaming chip designer. While these chips also do well in low-mid range laptops and desktops, AMD is no longer for the power-user nor do they want to be.
    My next PC in around early 2016 will absolutely be Intel and I won't feel bad about switching off of this team full of lies and disappointment.
    We should all be sad, because today is the day when competition in the cpu industry officially died.
    Reply
  • andrewaggb - Tuesday, January 14, 2014 - link

    Meh, it died a while ago. I had an amd 486 120, k6 200, k6-2 300, xp 1800, xp 2500, 64 3200, x2 4400, x2 6400, phenom 2 940, and later a phenom 2 965. That's a pretty solid customer. Since then I've had an i5 750, 2500k, 4570 in my desktops and i7's in my laptops. Intel is so clearly superior in cpu performance I just can't be bothered to give amd any money. I still buy their gpu's... Reply
  • mikato - Wednesday, January 15, 2014 - link

    I have a Phenom II X4 965 also and love it, but when I do a rebuild of our 2nd gaming machine it will have to be back to Intel. Reply
  • gruffi - Tuesday, January 14, 2014 - link

    This comparison is really stupid. i5 has 4 full cores or 4 FPUs. Kaveri still has only 2 modules or 2 FPUs. You should compare against i3, not i5. And from what I've seen so far, 65W Kaveri is on the level of the fastest Haswell i3. Which is an excellent result for AMD because they still have only 128-bit FMACs while Intel already uses a 256-bit FPU. So, Excavator will be quite interesting with its 256-bit FMACs. Reply
  • takeship - Tuesday, January 14, 2014 - link

    Not even 95W Kaveri is as fast as an i3-4430, which is not the fastest i3. Perhaps if you can manage to get everything HSA accelerated, but not otherwise. AMD themselves claimed just last week that Kaveri would compete with i5s (hence the 7850k priced at ~190$). So the inclusion is valid. Reply
  • just4U - Wednesday, January 15, 2014 - link

    You mean the 4330, yes? As the 4430 is the new starter i5... Anyway, there is an argument that can be made in AMD's favor here.. People buying off-the-shelf BestBuy/Walmart type specials with integrated graphics may get as good, if not a marginally better, overall experience from the AMD build if the offset in hardware costs translates into something more being added to the AMD machine..

    However.. (and this is key..) knowing such companies, it simply means higher profits for them without having to worry about the buyer bringing the damn thing back because they crippled it so much to make the price appealing.
    Reply
  • nos024 - Tuesday, January 14, 2014 - link

    Tell that to AMD. They were targeting the i5 with their pre-launch slides so it's fair to test their claims. Seriously, I was hoping Kaveri would deliver, but it didn't. So just let it go. I don't even think the performance increase is worth the $50+ or so over Richland or Llano. Reply
  • asliarun - Tuesday, January 14, 2014 - link

    Ian and Rahul, kudos on an extremely thorough review. You must have really burnt the midnight oil to get this done.

    One request: Can you please post your thoughts on how good Kaveri will be for HTPC? I have increasingly been feeling that a compact box that can do HTPC and do mid-level gaming in a pinch would be a really good idea for the living room (and media cabinet). Kaveri seems to be purpose built for that. However, I don't find any info on Kaveri's HTPC capabilities.

    Thanks!
    Reply
  • mikato - Wednesday, January 15, 2014 - link

    I agree! There isn't much info about how Kaveri fits into the various computer roles, and I'm interested in HTPC possibilities as well. It seems with its advantages in video, and plenty of CPU power for HTPC needs, that it would be an outstanding fit. Reply
  • bobbozzo - Tuesday, January 14, 2014 - link

    Hi, both the 45w and 65w tables on page 1 list the A8-7600.
    AFAICT, the A8-7600 is 65w, so a different part number should probably be on the 45w table.

    I'm looking forward to an HTPC comparison between Kaveri and Haswell.
    CPU performance differences seem mostly irrelevant nowadays for HTPC, but I'm wondering which would have better 4k playback, etc.

    Also wish some AMD ITX boards would be made with some decent (non-Realtek) NICs.

    thanks!
    Reply
  • T1beriu - Tuesday, January 14, 2014 - link

    "You may notice that the Kaveri model listed is the same model listed in the 45W table. This is one of the features of AMD’s new lineup – various models will have a configurable TDP range, and the A8-7600 will be one of them." Reply
  • hyperspaced - Tuesday, January 14, 2014 - link

    I am looking to build a nice HTPC to run XBMC, maybe with some casual gaming.
    The A8-7600 @45W is ideal for my needs: decent CPU performance, killer GPU at $120...

    I'm pretty sure those A8's will sell like hot cakes.
    Reply
  • bobalazs - Tuesday, January 14, 2014 - link

    A8-5500 is 65W TDP and not 45W as stated on page "Testing Platform" Reply
  • Ryan Smith - Tuesday, January 14, 2014 - link

    Thanks! Reply
  • lorribot - Tuesday, January 14, 2014 - link

    I still don't get the point of integrated graphics for anything other than general 2D use. In 3D gaming they are of no real use on any sensible sized screen. They waste die space, cost huge amounts extra to develop, and if you put a dGPU in they have absolutely no use at all.
    Granted, if you can offload all the FP to the GPU and just do Int on the CPU there may be some benefit, but that is a long way off, probably another two or three years before it becomes a reality.
    For the majority of user scenarios Intel are in the right area; AMD are way off the mark with their sickly CPU designs, and by the time iGPU performance becomes a real issue Intel will have the hardware in place with their faster development cycles.
    Reply
  • nos024 - Tuesday, January 14, 2014 - link

    Totally agreed. Two years ago I was excited about AMD's direction with APUs, but after buying the first generation I realized I was not satisfied with the performance (even as a casual gamer) and ended up getting a discrete GPU. Two years or so later this is still the case. The sad reality is that by the time these APUs reach 1080p @ 60fps, the world will have moved on to 4K performance at 60fps. Reply
  • andrewaggb - Wednesday, January 15, 2014 - link

    Yeah. The PS4 and XB1 designs both addressed the shortcomings, the XB1 with eDRAM and the PS4 with GDDR5. As AMD helped design both, I don't see why they couldn't have done a solution similar to one or the other, with eDRAM seeming the most obvious choice due to cheap DDR3. Even if it doesn't sell extremely well, it's tech you already have, and it would be the first time in a long time that you could come out and show an overwhelming lead over Intel with enough performance to actually meet consumer needs.

    Their current offering, though having better igpu performance than intel, can't run modern titles at good enough quality to get taken seriously.

    Releasing something today with XB1 equivalent gpu speed and roughly equivalent multi-core performance would have drawn a crowd and a whole lot of positive talk and excitement. Combined with Mantle, Steam OS, and already having XB1 and PS4, they could have really had some momentum, even with their comparatively lousy cpu performance.

    Feels like a real opportunity wasted to me. Sure, they could announce that next year... but it's another year for Intel, Nvidia, and who knows who else to catch up. First to market, with time to capitalize, can pay off for years and years. Look at the iPhone and App Store: that comes from being out first with no serious competition for a long period of time. If AMD had an XB1-equivalent APU today, plus Mantle, I guarantee they'd get lots of Mantle support and have a good couple of years at least.
    Reply
  • TEAMSWITCHER - Tuesday, January 14, 2014 - link

    I very much agree with your sentiment. I have absolutely no use for the graphics that are included with these devices...AMD or Intel...it's all garbage to me. Even without a GPU, I would still buy an Intel 4th Generation Core processor because it's the best at what I need a CPU to do. Intel is wasting hundreds of millions of transistors on every desktop processor they manufacture. Think of the incredibly small dies and high yields they could have enjoyed.

    And all those who would have been interested in this crap, now have even lower cost ARM tablets that can easily replace the low cost PC's these devices were meant to power. That's a fitting end given the "our crap is slightly better than your crap" game they wanted to play.

    The industry should abandon Integrated Graphics on all desktop processors. Intel's upcoming Haswell-E parts should be the ONLY desktop parts. Any part with a GPU should be reserved for laptops, convertibles, tablets, or other small form factor segments.
    Reply
  • nader_21007 - Saturday, January 18, 2014 - link

    "In 3d gaming they are of no real use on any sensible sized screen." You are either blind or an Intel fanboy. I agree that Intel's IGP is pointless because it's very weak, but AMD's APUs can play every game at 720p, or even 1080p, comfortably. If you have higher expectations you have to pay for a pricey GFX card. Hardcore gaming is not free, LOL. Tell me if you could get a better IGP/GPU from Intel. Reply
  • PolarisOrbit - Tuesday, January 14, 2014 - link

    In the GPU benchmarks the charts were all divided into categories "Performance," "Quality," and "Xtreme." I went back to Testing Platform page to discern what these meant but it didn't say. Any clues? Reply
  • Ryan Smith - Tuesday, January 14, 2014 - link

    Check the first page of the GPU performance section; it goes over the different categories. Reply
  • Th-z - Tuesday, January 14, 2014 - link

    Nice review and very nice info from Rahul. Questions and comments:

    1. Not sure why you guys think this should be called SoC. Its MB still has a southbridge and other components to make a complete system.

    2. Page 2, first paragraph: "Apple’s Mac Pro dream" and "by doing the dream on a single processor". What is the "dream" you are referring to?

    3. Page 9, Table "Intel TestBed", i7s should be 4C/8T.

    4. Don't you guys think it's odd how close the 95 W A10-7850K is to the 65 W and 45 W A8-7600 in gaming? With 33.3% more SPs, it should show a more noticeable increase in FPS. We see a better spread with Richland, yet all three Kaveri parts are so close to each other, which really makes the 7850K unattractive at more than double the TDP of the 7600 at 45 W.

    Could it be that it's bottlenecked by memory bandwidth? Sooner or later AMD will need to do their own embedded RAM or put GDDR5 on the MB -- the world's first user-configurable GDDR5 on a MB, that would be something.
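    To put a rough number on the bandwidth suspicion, here's a back-of-envelope sketch (peak theoretical figures only, not measurements; the GDDR5 line assumes a PS4-like 256-bit bus at 5.5 GT/s for comparison):

```python
# Peak memory bandwidth = channels * bus width in bytes * transfer rate.
def peak_bw_gbs(channels, bus_bits, mega_transfers_per_s):
    """Theoretical peak bandwidth in GB/s (decimal GB)."""
    return channels * (bus_bits / 8) * mega_transfers_per_s / 1000.0

ddr3 = peak_bw_gbs(2, 64, 2133)    # dual-channel DDR3-2133: ~34.1 GB/s
gddr5 = peak_bw_gbs(1, 256, 5500)  # PS4-style 256-bit GDDR5: 176.0 GB/s
print(ddr3, gddr5)
```

    A 512-SP GPU fed by ~34 GB/s, shared with the CPU no less, gets a fraction of what a console-class setup does, which would explain why the extra shaders on the 7850K barely show up.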
    Reply
  • T1beriu - Tuesday, January 14, 2014 - link

    4. Could it be because all of them have the same 512 core GPU clocked at the same speed of 720Mhz? Reply
  • T1beriu - Tuesday, January 14, 2014 - link

    Nope, I'm just blind. Reply
  • ws3 - Tuesday, January 14, 2014 - link

    The Iris Pro results have got to be very worrying for AMD.
    Yes, Intel is still behind in integrated graphics, but they are improving by leaps and bounds, and their CPU performance is miles ahead of AMD.
    I loved my Athlons, but I think AMD is on the verge of having nothing to offer.
    Reply
  • UtilityMax - Tuesday, January 14, 2014 - link

    Iris Pro is basically a proof that Intel can respond to AMD if the APU market does take off. Right now Iris Pro graphics are too expensive, but they could eventually move it onto the same die shared with a lower end i3 CPU. Reply
  • thomascheng - Wednesday, January 15, 2014 - link

    AMD has already proven they can make a bigger core. Just look at the PS4 and Xbox One. I think that while Intel can have the performance crown, only a small percentage will actually buy it. Reply
  • andrewaggb - Wednesday, January 15, 2014 - link

    The shame is that AMD should have released an APU similar to the Pro. They have the tech, the drivers, Mantle coming, the gaming deals... to actually pull it off. The biggest problem with Iris Pro (besides price) is that Intel drivers suck. AMD drivers (at least in single-GPU scenarios) work well. I think releasing cheap APUs is a good move as well, but at least one high-end model would have been good. I think it would sell reasonably well if it could get 60fps at 1080p (PS4-level perf). I don't even think it has to be all that cheap; it should be worth at least a $100 premium. It enables new possibilities and smaller/quieter form factors. People like that. Reply
  • khanov - Tuesday, January 14, 2014 - link

    As you hinted at several times in your article, I think we could see even better iGPU performance from Kaveri with increasing memory bandwidth. I wonder if you might be interested in benchmarking some games at different memory frequency settings up to (and beyond?) DDR3-2400?

    I personally was hoping that Kaveri might launch with a quad-channel DDR3 memory controller. Yes it would require a new socket and the added signal routing complexity would push up board prices a bit. However, as FM2+ boards are so cheap, I don't see that as a big issue really.

    You also asked if anyone would be interested in a XBox One/PS4 style APU with significantly more GPU cores. Well I would! But again, that memory bandwidth issue needs to be resolved.

    Something like a six-layer mini-ITX board with 8GB of GDDR5 soldered underneath (where else would there be room to put it?) would be brilliant if coupled with a PS4-style APU. It's a nice dream.

    Thanks for the very in-depth review. It was a pleasure to read.
    Reply
  • LarsBars - Tuesday, January 14, 2014 - link

    Thanks Ian / Rahul for the article, and thanks especially for having a page on the FX / server situation. I like to follow AMD news, and I trust AnandTech to be a reputable source that won't get emotional.

    I would love to know if you guys have any eta of any companies manufacturing 16GB DDR3 unbuffered non-ECC ram sticks, though.
    Reply
  • SilthDraeth - Wednesday, January 15, 2014 - link

    Going to go with a few other people here Ian, and Rahul, you guys point out that AMD wants to be able to play said games at 1080p at 30 frames per second.

    And yet, you didn't find a setting in your benchmark games that ran at 30 fps at 1080p and then duplicate those settings for the other systems. I understand this will take a bit more work, but I would like to see it running Sleeping Dogs at 1080p, what settings were needed to hit 30fps, and then see what fps the rest of the systems hit at the same settings.

    Can you please update this review with that information?
    Reply
  • yottabit - Wednesday, January 15, 2014 - link

    I'm very disappointed to see meaningful conclusions attempted to be drawn from benchmarks of 2-6 FPS in an Anandtech article. Saying things like "The Iris Pro really suffers in Sleeping Dogs at 1080p" is ridiculous when all the FPS are < 7. More useful info would have been about why the Iris pro gets hit harder... I'm assuming because the eDRAM is less and less effective at higher res and settings, and Intel has yet to solve the memory bandwidth issue. Obviously the Iris Pro has the raw GPU horsepower because it's able to keep up fine at the lower resolutions.

    I'm more impressed at how far Intel has come than AMD (who has historically enjoyed a large lead) in terms of iGPU tech. Thinking back to things like the GMA graphics and I'm very happy to see Intel where they are today.
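    Converting FPS to frame times makes the low-end comparison even starker; a quick sketch (using the minimum figures quoted elsewhere in this thread, purely for illustration):

```python
# Frame time in milliseconds for a given average FPS.
def frame_time_ms(fps):
    return 1000.0 / fps

print(frame_time_ms(30))    # ~33.3 ms per frame: roughly the playability floor
print(frame_time_ms(6.23))  # ~160.5 ms per frame: a slideshow regardless of chip
```

    Whether one iGPU takes 160 ms per frame and another takes 250 ms, both are equally unplayable, so ranking them at those settings says very little.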
    Reply
  • yottabit - Wednesday, January 15, 2014 - link

    It's also pretty bad to say things like a very easy to miss ambiguous line saying "Unfortunately due to various circumstances we do not have Iris Pro data for F1 2013" and then reference the same charts saying "none of the Intel integrated graphics solutions can keep up with AMD" Reply
  • duploxxx - Wednesday, January 15, 2014 - link

    Obviously the Iris Pro has the raw GPU horsepower because it's able to keep up fine at the lower resolutions.

    you just proved yourself that you have no idea, since it's the other way around.....
    Reply
  • yottabit - Wednesday, January 15, 2014 - link

    I doubt they were CPU bound in those instances, which is what you seem to be implying.

    There is a difference between being GPU bound in general and being GPU bound at certain settings and resolutions. I would assume the Iris Pro suffers more heavily from increases in resolution and detail because of its 128 MB of eDRAM. If we could have seen increased-quality testing at lower resolutions, that would help confirm this. For instance, shader-intensive testing at a lower resolution...
    Reply
  • yottabit - Wednesday, January 15, 2014 - link

    Actually, we know they weren't CPU bound at lower resolutions, because the 6750 discrete card showed consistently higher results than the Iris Pro and AMD. If it were CPU bound you would think you'd see the same results with the 6750.

    What I was trying to say is that the Iris Pro is suffering disproportionately from some sort of scaling, and the article does little to examine what that is and what the advantage of the AMD is. Does the AMD have more shader power, and is that why it's able to scale better at high quality settings? Or does it have better memory bandwidth management, and is that why it's able to scale better at high resolutions? It's obviously scaling better somehow, because the Iris Pro beats it in many benchmarks at low res but loses out at high res. Because quality and resolution are coupled, it's hard to learn what's going on. It might be a good system to use for Anandtech Bench, but I would like to see testing data that is specific to the scope of the article...
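    One crude way to probe this without new data (a hypothetical helper with made-up example numbers, not figures from the review): check whether FPS falls roughly in proportion to pixel count as resolution rises.

```python
# If FPS drops about as fast as pixel count grows, fill rate or memory
# bandwidth on the GPU side is the limiter; if FPS barely moves, the CPU
# (or something resolution-independent) likely is.
def bound_hint(fps_low_res, fps_high_res, px_low, px_high):
    fps_ratio = fps_low_res / fps_high_res  # >1 means faster at low res
    px_ratio = px_high / px_low             # growth in per-frame pixel work
    if fps_ratio >= 0.8 * px_ratio:
        return "likely GPU-bound"
    if fps_ratio <= 1.1:
        return "likely CPU-bound"
    return "mixed"

# Made-up illustration: 60 fps at 720p dropping to 28 fps at 1080p.
print(bound_hint(60, 28, 1280 * 720, 1920 * 1080))  # likely GPU-bound
```

    Running that comparison separately for resolution sweeps and for quality sweeps would tease apart the bandwidth question from the shader-power question.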
    Reply
  • ericore - Wednesday, January 15, 2014 - link

    This is the least impressive review I have ever seen on Anandtech; it's not horrible, but it's nowhere near the usual fantastic mark I would give. I did like the bit on overclocking, but found the whole benchmark section designed without thought (or half-ass done); quite frankly you could remove the whole thing from the article. And where is the overclocked Kaveri in the benchmarks? First time I've had to use other review sites.

    At $200 CAD, Kaveri will need a price cut if AMD expects this thing to sell well. No way that's worth $200; $160 tops.
    Reply
  • MrHorizontal - Wednesday, January 15, 2014 - link

    HSA, hUMA and Mantle are all very interesting, but as has been pointed out many times, it's the APIs that make or break a platform. On this note, there isn't such a thing as a 'heterogeneous' API. I can see situations where Mantle would help with math acceleration in HPC contexts, and also see where hUMA makes a lot of sense in desktop contexts. The HSA Foundation has it spot on to standardise the instructions across all of these distinct technologies. In effect this would make HSAIL the 'holy grail' ISA, with x86 playing second fiddle to it. So, yes, the real sticking point is, as mentioned, making the compilers, JITs and VMs aware of the stack and able to use it when/if available. The issue is that there are only so many bytecodes a single program can support, so having the hardware speak the same language as an intermediary language like HSAIL makes the bridge between hardware and software significantly easier. The proof as always is in the pudding, and it all depends on whether the design choices provided by HSAIL are good enough or not.

    You asked in the review whether it would be good to have a SoC with a much bigger GPU in it. The answer is yes and no. SoC's make a lot of sense to phone makers and heavily integrated high volume players, in particular, Apple. In fact, I'd be very surprised if Apple isn't talking to AMD about Kaveri and APU's generally. Because it's products like the iMac and Macmini that stand to benefit most from an APU - small computers driving enormous screens (if you realise that a Macmini is a firm favourite HTPC when driving a TV).

    However, while there are isolated use cases such as consoles, iMacs, Mac minis and the like for a SoC like Kaveri, what I'd like to see is some more effort on beefing up the buses and interconnects between chips. The first and most obvious low-hanging fruit to target here is the memory bus, because SDRAM and its DDR variants are getting a little long in the tooth. RAM is fast becoming a contention point *slowing down* applications, particularly in high-throughput distributed contexts.

    With AMD specifically, though, I'd like to see a (proprietary, if necessary) bus to allow all of the HSA and hUMA magic to happen between a discrete CPU and GPU. In other words, I as an ISV or OEM could build a machine with a Jaguar CPU and an R9 card and employ the benefits of a system heavily skewed to GPU usage (such a setup would be good for video walls, running 6-24 screens in a Single Large Surface Eyefinity setup). Alternatively, the bus, due to its necessity to be quite wide, should also be beefed up to address significantly more than 32GB of RAM. As a programmer, RAM is a massively limiting factor and I really could do with a high-end laptop with 64-96GB RAM in it; why doesn't this exist? So, buses. You saw how important HyperTransport was back in the day. Now we need a new one, a fully HSA-compliant HyperTransport.

    The bus between interconnected components within a machine is also only half the problem. The next problem after that is a bus capable of lashing together multiple machines, all working together as a heterogeneous cluster.

    So yeah. SoCs are good, but there are simply too many use cases in business and industry where there isn't enough justification to fabricate a custom SoC for a given task. Rather, it'd be far more beneficial to provide all of these technologies in a modularised format, and ironically start transforming the PC into something more like a Transputer (Transputers were basically machines with a super-wide bus into which you just plugged modules: if you wanted more CPUs, you plugged in a CPU module; if you wanted graphics, storage, etc., you plugged those in).

    So I think AMD are definitely on the right track - but even they say it's only the first part of the puzzle in moving to a post-x86 ISA:
    - We need fully HSA-capable buses (first a HyperTransport-esque solution between discrete GPU and CPU, then a NUMA-esque solution to lash together clusters of machines)
    - We need it to be an open spec backed not just by AMD, but by Qualcomm, ARM and Intel (though Intel will need to be strong-armed into loosening their grip on the x86 golden goose - but I think even they realise this, given their efforts in Iris and Knights Landing)
    - We need hardware to comply with an industry-standard bytecode, to meet the software people in the middle, who all have to code to a specific specification

    And with that, we'd truly have an end to the PC and x86 as a dominant architecture, with the ISA actually targeting the bus and the combined capabilities of all the hardware modules rather than those of a specific CPU.

    I'd also like to see an investigation, or at least this question put to AMD's engineers: why does Steamroller even need an FP unit at all? Couldn't the GPU effectively handle all FP work on the CPU side? Wouldn't it be cheaper/faster/better to put in a fixed-function emulation bridge translating all x87 calls to GCN?
    Reply
  • mikato - Wednesday, January 15, 2014 - link

    For your last paragraph, I'm pretty sure something like that has been the idea since the beginning of Bulldozer/Piledriver/Steamroller/Excavator. It has an FP unit because they haven't yet found a way to move all that work to the GPU. Reply
  • fteoath64 - Sunday, January 19, 2014 - link

    "Now we need a new one, a fully HSA compliant HyperTransport." Yes! The dedicated people working on new supercomputers are building exotic interconnects close to or exceeding 1TB/s, though limited by distance, naturally. I see that with HyperTransport 3.0 one can implement 10 channels for high aggregate bandwidth, but that will use more transistors. On a budget-conscious die, using eSRAM seems to be a good trick to boost bandwidth without overt complexity or transistor cost. The downside is that eSRAM draws constant power, so it becomes a fixture in the TDP numbers. Iris Pro uses 128MB of eDRAM, while the Xbox One uses 32MB of eSRAM. I think the minimum would be somewhere around 24MB for x86 to get effective RAM bandwidth high enough.
    The cascading effect is that the memory controller becomes complex and eats into the transistor budget considerably. It seems like a series of moving compromises to hit the required performance numbers versus the power budget for TDP.
    I am actually very excited to see an ARM chip implementing HSA!
    Reply
  • Samus - Wednesday, January 15, 2014 - link

    I don't get why AMD can't compete with Intel's compute performance like they were absolutely able to do a decade ago. Have they lost all their engineering talent? This isn't just a matter of the Intel manufacturing/fab advantage. Reply
  • zodiacfml - Wednesday, January 15, 2014 - link

    oh no, after all that, I just came away impressed with the Iris Pro. I believe more memory bandwidth is needed for Kaveri to stretch its legs. Reply
  • duploxxx - Wednesday, January 15, 2014 - link

    impressed with Iris Pro? for that price difference I would buy a mediocre CPU and a dedicated GPU and run circles around it in any game.... Reply
  • oaf_king - Wednesday, January 15, 2014 - link

    I can point out some crapola here: "I am not sure if this is an effect of the platform or the motherboard, but it will be something to inspect in our motherboard reviews going forward." This discounts the major performance benefits you can achieve without faulty hardware. Search for the real benchmarks on WCCFtech for the A10-7850K and be amazed. I strongly doubt the CPU has any issue running at 4GHz on a stock cooler with a 900MHz GPU. Yes, the GPU overclock seems skipped over in this AnandTech review also, but it should really pull the chip into the "useful" category for gaming! Reply
  • oaf_king - Wednesday, January 15, 2014 - link

    recall AMD had some leaks suggesting a 4GHz CPU / 900MHz GPU. Is that possible after all? Apparently not all motherboards are faulty. If the TDP tops out at 148W at 4GHz, given the conservative power envelopes already placed on the chip, I'm sure it gets very good performance for between zero and ten extra dollars, and a couple of seconds in the BIOS. Reply
  • Fox McCloud - Wednesday, January 15, 2014 - link

    Maybe I was skim-reading and missed it, but what are the idle power consumption figures for the A8-7600? I need a new home server and I have an ITX system, and motherboards with 6x SATA are slim pickings. It seems the manufacturers only put them on AMD ITX boards, as Intel's seem to max out at around 4. I also wonder what the power figures would be like if underclocked. I might re-read the review!

    Excellent review as always guys. So in-depth, informative, technical and unbiased. This is why I love this site and trust your expert opinion :)
    Reply
  • Zingam - Wednesday, January 15, 2014 - link

    AMDs PR: "The processor that your grandparents dream of!" FYEAHA! Reply
  • keveazy - Wednesday, January 15, 2014 - link

    My i5 4440 costs the same as the A10-7850K. I don't think AMD will ever compete. By the time they release something that represents a significant jump, Intel will already have something new to destroy it. Reply
  • duploxxx - Wednesday, January 15, 2014 - link

    compete to do what? for general tasks in a day, just buy an SSD... cost? did you check your motherboard price? GPU - did you check HD 4600 performance vs the A10? the A10 runs circles around it, unless you want to be stuck at low resolution with your gorgeous fast CPU.

    you see customers fool themselves not knowing what to buy for what. hey, I have the best benchmarking CPU, but in daily tasks I can't even count the microseconds of difference.
    Reply
  • keveazy - Wednesday, January 15, 2014 - link

    Compete on heavy software and GAMING. My motherboard only costs $60. Of course I "checked". Intel is not stupid. Their HD 4600 is not meant for gaming and they don't boast about it. I use a dedicated GPU to run games.

    AMD keeps advertising that their iGPU can run today's games like Battlefield 4 at 1080p. Yeah, on low settings... Even the PS4's GPU is better than that.
    Reply
  • YuLeven - Thursday, January 16, 2014 - link

    If general tasks and gaming are at stake here, I say buy a cheap Pentium G for US$65 and an HD 7770 GHz Edition for US$110 (today's price at Newegg). The SSD too, of course. Voilà: you have fast general tasks and a GPU that makes the A10-7850K's GPU look like a toy. Reply
  • just4U - Thursday, January 16, 2014 - link

    Your i5 costs $200ish.. sales might get that down to $185 if you're lucky.. I rarely see it though. Reply
  • keveazy - Saturday, January 18, 2014 - link

    I bought my i5 4440 for $175, dude. You serious? Reply
  • Laststop311 - Wednesday, January 15, 2014 - link

    the 45 watt or 65 watt one would make an awesome HTPC loaded with all the old-school emulators and some new games with controllers. Reply
  • JDG1980 - Wednesday, January 15, 2014 - link

    Old-school emulators will work fine on almost any modern system, including one with Intel's integrated graphics. Reply
  • fox1986 - Wednesday, January 15, 2014 - link

    Would it be possible to make a fm2 motherboard with 6 display outputs? Reply
  • fteoath64 - Sunday, January 19, 2014 - link

    "with 6 display outputs?". Buy an Eyefinity 6 GPU card and stick it in the PCIe x16 slot and you are set. There was one Eyefinity card that had 5 miniDP ports and a DVI connector, if I am not mistaken. Reply
  • figus77 - Wednesday, January 15, 2014 - link

    "It is interesting to note that at the lower resolutions the Iris Pro wins on most benchmarks, but when the resolution and complexity is turned up, especially in Sleeping Dogs, the Kaveri APUs are in the lead."

    It's simple... at low resolution and detail the raw CPU power of the i5 crushes the A10 when there is no GPU bottleneck...
    Reply
  • duploxxx - Wednesday, January 15, 2014 - link

    that could be tested by inserting an equal discrete GPU into both systems.... Reply
  • Fox5 - Wednesday, January 15, 2014 - link

    There's also the issue of the Iris Pro's 128MB edram. At a certain point, it probably is insufficient for the settings and resolution. Reply
  • BSMonitor - Wednesday, January 15, 2014 - link

    Power consumption numbers?? It would be interesting to see what adding that many transistors (particularly the 128 extra GPU cores) did to those. Reply
  • Da W - Wednesday, January 15, 2014 - link

    242 comments so far. Whatever people say, AMD still interests a lot of people and they have a future ahead of them. Reply
  • thomascheng - Wednesday, January 15, 2014 - link

    Lets hope they do well, or we will be stuck with buying $1000 Intel CPUs and Nvidia GPUs. Reply
  • TheJian - Wednesday, January 15, 2014 - link

    "For casual gaming, AMD is hitting the nail square on the head in its quest for 1080p gaming at 30 frames per second, albeit generally at lower quality settings."

    Maybe if they'd said 1280x1024 gaming, your comment would be true. Most of the games have minimums at 1080p well below 20, and some even average below 30. This resolution is NOT playable on these crap chips. Buy a video card and Intel. Period. I really thought these would be faster, but then the downclock due to process happened. Gains in games where you're STILL unplayable aren't gains. It's a waste of space to benchmark settings you can't actually play at. I would rather have seen a dozen games benched at 1280x1024 and a few you KNOW could run above 30fps at 1680x1050. 1080p here was pointless. AMD should be derided for even mentioning this resolution with so many games not even playable at average fps - never mind what happens when you click the MIN button in your charts.

    Discrete clearly has long legs, as anyone building one of these machines with the new APUs will quickly realize they need a discrete card to enhance their gaming. I really don't think the dire situation at 1080p will change until 20nm or beyond, at which point you may have MORE games that CAN run 1080p OK, versus what you see here, which is just a joke today.

    The games don't even look the same when you turn off every feature possible just to hit 30fps. Do you want your games to look like a Nintendo 64's, or a PC's? They should have grown the die a bit for more GPU, so at least 1680x1050 would be pretty good. I don't see AMD making money on CPUs for 2 years :( That means consoles + GPUs have to hold the company up, and that won't be enough to keep up with R&D in mobile, GPU and CPU. Consoles sold 7mil so far, so at $10-15 per chip ($100 price per console, assuming 15% margin, if they even get that) we're talking about a max of $105mil profit from consoles for the quarter. If they don't keep selling like it's launch month for the next 12 months, I see AMD slowly getting weaker. They owe GF $200mil, so 4 of these console quarters would be ~$420mil, half of which is blown on GF fines, and the other $200mil goes to interest on their huge debt each year. They need to make some REAL money on CPU/GPU/mobile or this never gets better, right? We know CPU is basically out for 2 years, as they say in this article. OUCH. So GPU/mobile has to make something in those two years, or this just gets worse, and drivers etc. will see more phase 1, 2, 3 fixing crap for ages.
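
    The back-of-envelope console math above can be sanity-checked in a few lines. Every input here is this comment's assumption (units shipped, per-chip profit, the GF payment, the interest bill), not a confirmed AMD figure:

```python
# All inputs are the comment's assumptions, not confirmed AMD figures.
units_per_quarter = 7_000_000    # consoles shipped so far (launch quarter)
profit_per_chip = 15             # assumed upper-end profit per APU, in USD

quarterly_profit = units_per_quarter * profit_per_chip
annual_profit = 4 * quarterly_profit   # if launch-quarter volume held all year

gf_obligation = 200_000_000      # assumed amount owed to GlobalFoundries
interest_cost = 200_000_000      # assumed rough annual interest on the debt

print(quarterly_profit)                               # 105000000
print(annual_profit)                                  # 420000000
print(annual_profit - gf_obligation - interest_cost)  # 20000000
```

    In other words, even the optimistic case leaves almost nothing over, which is the point being made.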

    The only impressive thing I saw here was the Mantle performance AMD claims in BF4. But how many times can you afford $8mil to get this done? I'm sure they amped things up for the first-time showcase in BF4, but how many times can you even afford $2-4mil for this stuff? And then do you get what the dev said at the AMD APU show (only one, BTW) - that 20% would not be unreasonable for your efforts? Far below $8mil for apparently 45% in BF4, right? Devs will opt for the 2 weeks to port a game to MOBILE first, as NV showed can be done with any OpenGL game (all PS3 and PS4 games, many PC games, etc.) like Serious Sam 3 and Trine 2 (most of the time, according to AnandTech, was spent on controls, NOT the porting). Unreal Engine 3 was ported in 4 (yes... FOUR) days by Epic/Mozilla, and it only took about 10 people. DICE said 2-3 months for Mantle. Devs might spend 2 weeks just for more sales in a 1.2B-unit mobile market; they will need PAYMENT to do it for Mantle, which brings ZERO extra profit (it only makes people happy they bought AMD - no extra cash, right?). I really hope ARM takes off on desktops, because we currently have an Intel-only race and need someone with CASH to fight them. Bring on K1 (and all its enemies) and all the games for mobile this will create (ported or new; I don't own a PS3, so I'd buy some of those ports that I can't get now). Since we have no real x86 competition any more, we need ARM to pick up AMD's slack.
    Reply
  • Novaguy - Wednesday, January 15, 2014 - link

    It's going to depend on the settings; other reviewers who did 1080p at low-to-medium settings demonstrated playable frame rates for the A8 but not for the Intel iGPs. Reply
  • mikato - Wednesday, January 15, 2014 - link

    Ian/Ryan - This seems wrong - "For the 100W APUs at 1280x1024, there is almost no movement between the Richland and the Trinity APUs, except for Company of Heroes" under "IGP Gaming, 1280x1024". In this particular graph, it shows an improvement from Trinity to Richland and then not much improvement from there to Kaveri, except for Company of Heroes. Reply
  • Ryan Smith - Wednesday, January 15, 2014 - link

    Noted and fixed. Thank you. Reply
  • tekphnx - Wednesday, January 15, 2014 - link

    Looks like a pretty nice improvement for its intended market, namely the HTPC and casual gaming crowd. Calling the onboard GPU decent at 1080p is a laugh though, as other people have said. For 720p, sure, but not 1080p.

    Prices have shot up from the previous generation, which is unwelcome. And I very much lament the omission of Steamroller from the FX roadmap as an FX owner myself. AMD shouldn't abandon FX... the least they could have done if they're abandoning FX is to include a 6-core Kaveri part at the top end, but it looks like that's not materializing either.
    Reply
  • zodiacsoulmate - Wednesday, January 15, 2014 - link

    the first 4 pages are way better than the last 4 pages :) anyway, a great article - I read it for like half an hour Reply
  • eanazag - Wednesday, January 15, 2014 - link

    In reference to the lack of FX versions, I don't think that will change; I think we are stuck with it indefinitely. From the AMD server roadmap and the info in this article related to process, I believe the Warsaw procs will be a die shrink only for the 12/16-core parts, because the GF 28nm process doesn't help clocks. The current clocks on the 12/16-core procs already suck, so they might stay the same or improve a bit because of the TDP reduction at that core count, but the process doesn't benefit the 8-core-or-fewer Piledriver series. Since AMD has needed to drive CPU clocks way higher to compensate for a lack of IPC, and the 28nm process hurts clocks, I am expecting not to see anything for FX at all. The only thing that could change that is if a process at a fab other than GF would make a good fit for a die shrink. I still doubt they will be doing any more changes to the FX series at the high end.

    So to me, this might force me to consider only Intel for my next build because I am still running discrete GPUs in desktop and I want at least 8 core (AMD equivalent in Intel) performance CPUs in my main system. I will likely go with a #2 Haswell chip. I am not crazy about paying $300 for a CPU, but $200-300 is okay.

    I would not be surprised to see an FX system with 2P like the original FX; the server roadmap is showing that. This would essentially be two Kaveris, maybe with CrossFire between the two procs. That sounds slightly interesting if I could ratchet up the TDP for the CPU. It does sound like a Bitcoin beast.
    Reply
  • britjh22 - Wednesday, January 15, 2014 - link

    I think there are some interesting points to be made about Kaveri, but I think the benchmarks really fall short of pointing to some possibly interesting data. Some of the things I got from this:

    1. The 7850K is too expensive for the performance it currently offers (no proliferation of HSA yet), and the people comparing it to cheaper CPU/dGPU combos are correct. However, to say Kaveri fails based on that particular price comparison is to miss what else is here, and the article does point that out somewhat.

    2. The 45W part does seem to be the sweet spot at the moment for price-to-performance, possibly indicating that more iGPU resources don't yield much benefit without an onboard cache like Crystalwell/Iris Pro. However, putting the 4770R amongst the benches is not super useful due to its price and lack of availability, not to mention it not being socketed.

    3. The gaming benchmarks may be the standard for AT, but they really don't do an effective job of either proving or disproving AMD's claims for gaming performance. Plenty of people will say (and, looking at the comments, have said) that AMD failed at 1080p gaming based on scores at 1080p extreme settings. Even some casual experimentation to see what is actually achievable at 1080p would be helpful and informative.

    4. I think the main target for these systems isn't really being addressed by the review, which may be difficult to do in a scored/objective way, but I think it would be useful. I think of systems like this, and more so those based on the 65W/45W parts, as great mainstream parts. For that price ($100-130ish) you would be looking at an i3 with iGP, or a lower-featured Pentium part with a low-end dGPU. I think at this level you get a lot more for your money with AMD. You have a system in which one aspect will not become inadequate before the other (CPU vs GPU) - how many relatives do we know who have an older computer with enough CPU grunt, but not enough GPU grunt? I've seen quite a few where the Intel integrated graphics were just good enough at the time of launch, but a few years down the road would need a dGPU or a more major system upgrade. A system with the A8-7600 would be well rounded for a long time, and down the road could add a mid-grade dGPU for good gaming performance. I believe it was an article on here that recently showed even just an A8 was quite sufficient for high-detail 1080p when paired with a mid-to-high-range card.

    5. As was referenced in another review and in the comments, a large chunk of Steam users are currently being served by iGPUs worse than this. These are the people who play MMOs, free-to-play games, Source games, Garry's Mod games, DOTA2/LoL, indie games, and things like Hearthstone. For them, and most users these should be aimed at, the A10-7850K (at current pricing) is not a winner; they would probably be better (value) or equally (performance) served by the A8-7600. This is a problem with review sites, including AT, which tend to really look at the high end of the market. This is because the readership (myself included) is interested in it for personal decision-making, and the manufacturers provide these products as, performance-wise, they are the most flattering. However, I think some of the most interesting and prolific advances are happening in the middle of the market. The review does a good job of pointing that out with the performance charts at 45W, but I think some exploration of what was mentioned in point #3 would really help to flesh this out. Anand's evaluation of CPU advances slowing down in his Mac Pro review is a great example of this, and really points out how HSA could be a major advancement. I upgraded from a Q6600 to a 3570K, and don't see any reason coming up to make a change any time soon; CPUs have really become somewhat stagnant at the high end of performance. Hopefully AMD's gains at the 45W level can pan out into some great APUs in laptops, for all the users of games like those mentioned above.
    Reply
  • fteoath64 - Sunday, January 19, 2014 - link

    As consumers, our problem with prices inching upwards in the mid-range is that Intel is not supplying enough models in the i3 range within the price points of AMD's APUs (mid to highest models). This means the prices are so well segmented in the market that they will not change, giving an excuse for slight increases, as we have seen with the Richland parts. The lack of competition in these segments suggests cartel-like behaviour in the x86 market.
    AMD is providing the best deal on a per-transistor basis, while consumers expect its CPU performance to run on par with Intel's. That is not going to happen, as Intel's GPU improvements inch closer to AMD's. With HSA, the tables may turn for AMD, and Intel, with Nvidia, will certainly have to respond some time in the future. This will come when the software changes for HSA make a significant improvement in overall performance for AMD APUs. We shall see, but I am hopeful.
    Reply
  • woogitboogity - Wednesday, January 15, 2014 - link

    Ah, AMD... to think that in the days of the Thunderbird they were once the under-appreciated underdog where the performance was. The rebel against the P4 and its unbelievably impractical pipeline architecture.

    The bottom line is Intel still needs them as anti-trust suit insurance... with this SoC finally getting off the ground, is anyone else wondering whether Intel was less aggressive with their own SoC stuff as an "AMD doggy/gimp treat"? Still, it's nice to be able to recommend a processor without worrying about the onboard graphics when they are on-chip.
    Reply
  • Hrel - Wednesday, January 15, 2014 - link

    "do any AnandTech readers have an interest in an even higher end APU with substantially more graphics horsepower? Memory bandwidth obviously becomes an issue, but the real question is how valuable an Xbox One/PS4-like APU would be to the community."

    I think as a low-end Steam box that'd be GREAT! I'm not sure what approach Valve is looking to take with Steam boxes, but if there's no "build your own" option then it doesn't make sense to sell it to us as parts. It makes a lot more sense for them to just sell the entire "console" directly to consumers - or through a reseller, but then I become concerned with additional markup from middlemen.
    Reply
  • tanishalfelven - Wednesday, January 15, 2014 - link

    You can install SteamOS on whatever computer you want... even one you built yourself or one you already own. I'd personally think a PC based on something like this processor would be significantly less expensive (I can imagine 300 bucks) and maybe even faster. And more importantly, with things like Humble Bundle it'd be much, much cheaper in the games department... Reply
  • tanishalfelven - Wednesday, January 15, 2014 - link

    I am wrong on "faster than the PS4", however; the point stands Reply
  • JBVertexx - Wednesday, January 15, 2014 - link

    As always, a very good writeup, although I must confess it took me a few attempts to get through the HSA deep dive! Still, it was a much-needed education, so I appreciate that.

    I have had to digest this, as I was initially really disappointed at the lack of progress on the CPU front, but after reading through all the writeups I could find, I think the real story here is the A8-7600 and opening up new markets for advanced PC-based gaming.

    If you think about it, that is where the incentive is for game developers to develop for Mantle. Providing the capability for someone who already has or would purchase an advanced discrete GPU to play with equal performance on an APU provides zero economic incentive for game developers.

    However, if AMD can successfully open up advanced gaming to the mass, low-cost PC market, even if that performance is substandard by "enthusiast" standards, then that does provide a huge economic incentive for developers, because the cost of entry to play your game has just gone down significantly, potentially opening up a vast new customer base.

    With Steam really picking up "steam", with the consoles on PC tech, and with the innovative thinking going on at AMD, I have come around to thinking this is all really good stuff for PC gaming. And it's really the only path to adoption that AMD can take. I for one am hoping they're successful.
    Reply
  • captianpicard - Wednesday, January 15, 2014 - link

    I doubt Kaveri was ever intended for us, the enthusiast community. The people Kaveri was intended for are not the type to read a dozen CPU/GPU reviews and then log on to Newegg to price out an optimal FPS/$ rig. Instead, they would be more inclined to buy reasonably priced prebuilt PCs with the hope of doing some light gaming in addition to the primary tasks of web browsing, checking email, watching videos on YouTube/Netflix, running Office, etc.

    Nothing really up till now has actually fulfilled that niche, and done it well, IMO. Lots of machines from dell, HP, etc. have vast CPU power but horrendous GPU performance. Kaveri offers a balanced solution at an affordable price, in a small footprint. So you could put it into a laptop or a smart tv or all in one pc and be able to get decent gaming performance. Relatively speaking, of course.
    Reply
  • izmanq - Wednesday, January 15, 2014 - link

    why put i7 4770 with discrete HD 6750 in the integrated GPU performance charts ? :| Reply
  • ABR - Thursday, January 16, 2014 - link

    Kind of funny to hear "Iris" mentioned here given that the SGI O2 was an early example of a workstation with HUMA back in the mid-90's. :) There was no such thing as GPGPU back then (it might have helped save SGI if there was), but it provided a big help for memory-intensive things like texture-mapping. Reply
  • vinayshivakumar - Thursday, January 16, 2014 - link

    Why are the iGPU codenames mentioned first in the CPU benchmarks, with the CPU in brackets?? What is the reason to mention the GPU at all in a CPU test? To the untrained eye, it might look as if the HD 3000 is faster than the AMD R7... which is obviously not true... Reply
  • vinayshivakumar - Thursday, January 16, 2014 - link

    Am I missing something here , apart from a bias ? Reply
  • Th-z - Thursday, January 16, 2014 - link

    I agree the labels can be misleading: in the CPU benchmarks, other than the second table "Agisoft PS v1.0 Mapping IGP" on page 10, all others are done with the CPU only. Some tables got it right, while some have iGPU names leading the labels with the model names in brackets. What should be in the brackets are the iGPUs. Consistency in labelling is needed. It's probably not intentional - just a result of messing up the labels while getting the review out ASAP. Reply
  • ryrynz - Thursday, January 16, 2014 - link

    You should be listing the CPU first in the WinRAR results. The GPU is obviously not important in this benchmark and really shouldn't be specified at all. Reply
  • sonofsanta - Thursday, January 16, 2014 - link

    "...the benchmark follows Jenson Button in the McLaren who starts on the grid in 22nd place, with the field made up of 11 Williams cars, 5 Marussia and 5 Caterham in that order."

    You sure a 2013 McLaren could make it through that field? I reckon it'd still fail to make the podium.
    Reply
  • HaryHr - Thursday, January 16, 2014 - link

    What's the story with Iris Pro and the huge difference between average FPS and minimum FPS?

    In the Bioshock Infinite performance test: 78 average and 6 minimum; in the quality test: 20 average and 1 minimum.
    If this is happening during gameplay, it makes the game almost unplayable. AMD's lineup has much smaller variance.

    It would be nice to see frame time tests.
    Reply
  • synce - Thursday, January 16, 2014 - link

    What a disappointment... I thought the 7850K would be the first rig I've built in years, but it's barely an improvement over the 6800K and still no match for an i5 :( Reply
  • Rogatti - Thursday, January 16, 2014 - link

    Drivers still immature ... I will wait to form an opinion Reply
  • SolMiester - Thursday, January 16, 2014 - link

    WOW, it looks to me like the A10-6800K is the better bang. I saw only 1 game where the R7 bettered the 6800K and either of them was playable.... Reply
  • ImSpartacus - Thursday, January 16, 2014 - link

    "Whether or not AMD decides to develop an APU with more than 8 GCN CUs is another matter. This is a point we've brought up with AMD internally and one that I'm curious about - do any AnandTech readers have an interest in an even higher end APU with substantially more graphics horsepower? Memory bandwidth obviously becomes an issue, but the real question is how valuable an Xbox One/PS4-like APU would be to the community."

    I would love to see a massive APU with a Pitcairn-sized GPU. I wouldn't mind if it required a 130-150+W TDP.

    I love seeing machines like the 4770R-powered Brix Pro. Valve obviously likes it as well, because it became the Steam Box sent to developers.

    If I'm getting the performance of a ~$200 GPU, I'd happily pay $300 to get the CPU baked in. I'm hoping DDR4 will allow Intel or AMD to pursue this kind of market.
    Reply
  • Haravikk - Friday, January 17, 2014 - link

    I'm really interested to find out what the dual graphics performance will be like now that AMD's APUs are much more up-to-date at long last.

    CrossFire using an APU and a discrete GPU has been entirely underwhelming thanks to how limited compatibility was, so much so that you are always better off fitting a better discrete GPU instead. The only case it remained interesting was for small form factors where the more limited range of low-profile cards (and less aggressive pricing) meant that compatible cards weren't always so far behind newer ones.

    But with GCN graphics, HSA and Mantle on the way, maybe we might finally see the benefit of pairing the two rather than just using a better pure CPU and discrete graphics. Of course, we could see some benefit already if more games used OpenCL for physics, since that at least would run quite well on an APU even if the discrete GPU still did all the heavy graphical work.
    Reply
  • Klimax - Friday, January 17, 2014 - link

    The tests are very incomplete. Where are the CPU+GPU results? AMD markets the APU as a total solution, so it should be tested as such. So far I have seen only the Luxmark set of results on TechReport, and when CPU+GPU was set, the APU lost, because the CPU part is sufficiently weak that not even the powerful GPU was able to save it.

    Also, some OpenCL tests are not marked as to whether they are CPU+GPU or GPU-only, which can skew things badly.
    Reply
  • eanazag - Friday, January 17, 2014 - link

    I think this article is good, but it deserves and needs some dings on it because it is on Anandtech. When we look at the benchmark section there is a disconnect between this article (the applications used for benchmarking) and the Bench portion of the website. I have enjoyed and trusted the bench methodologies used on this site over the years and find it a resource I come back to, to help me make purchasing and hardware decisions because of the bench. Over time I will find it difficult to compare findings without the bench. I have no problem with you updating the bench as what has been done with SSDs and GPUs. I just think this article overlooked what becomes the real value of what this site offers if you overlook the ongoing bench portion of this site. I looked in the bench and haven't really seen a whole lot of anything recent on AMD APUs in the bench. That's my first ding.

    I suspect that if Anand had been excited about AMD's release and had reviewed this product, we would have gotten a better appreciation of what Kaveri brings to the table. I got the message that the 45W part is very good in its segment and may be worth considering, but I did not get what is mentioned in the article. What's the reality of running Kaveri at 1080p? You gave us the high settings, but no one with any sense is going to settle for less than 12 fps. So what are the realistic settings that will give us 20/30+ fps (game depending)? What do we really have to work with? I understand Anand is busy and likely not excited about another AMD CPU product release. I get it; it is hard to get excited about desktop and higher-wattage laptop parts from AMD because they have been lackluster and underwhelming year over year for so long. I think this product fits in more than just the 45W space, but I didn't get that answer. I am still wondering, which is not what I would normally get at Anandtech. Normally, I would get more than I expected from a review. That's my second ding.

    This next is a general ding about the bench. You guys do great work here; I am on here about every day. But the bench could use some tweaking in relation to CPUs and GPUs across laptop, desktop and, to an extent, mobile. I want to know how a mobile GPU/CPU matches up to a desktop variant. Sometimes I even want to know how they match up against mobile devices (tablets and phones). I have gotten some of that info from articles, but matching it up in the mobile (laptop) bench is rough. Sometimes I do want to look at specific laptop info, but most of the time I am just considering the guts of the device. I weigh the trade-offs between desktops and laptops in applications. The information is here, but it is not easy to compare. This is an opportunity for improvement, especially given the changing technology landscape.

    This is an addendum to the bench issue. I'd like to compare server processors too. It seems to me that in some niches we might start seeing server parts fill the places where high-end desktop parts used to be. I already see this with AMD, and Intel is on the cusp of it as well. This is something to consider. I'd also like to see more enterprise items covered in general. AMD is selling ARM servers already? Where have I been? And there are ARM-based Linux distributions available.

    Thanks for the great work and timely info.
    Reply
  • HisDivineOrder - Saturday, January 18, 2014 - link

    "The point I am making with this heart-warming/wrenching family story is that the Kaveri APU is probably the ideal fit for what he needs. Strap him up with an A8-7600 and away he goes. It will be faster than anything he has used before, it will play his games as well as that new HD 6750, and when my grandmother wants to surf the web or edit some older images, she will not have to wait around for them to happen. It should all come in with a budget they would like as well."

    OR you could buy him a cheapo dual-core CPU from Intel plus a used mid-range card from a year or two ago.

    Far better performance for only a bit more. There is an argument for these APUs in gaming, and it's for those who want to game with something like a Gigabyte BRIX-sized computer that trades performance for size.

    Performance per dollar does not favor the APU, simply because you can get so much more performance by ignoring these APUs altogether.
    Reply
  • NinuGie - Saturday, January 18, 2014 - link

    such wow much fail. Until an APU comes in a compact desktop format similar to what they did on the PS4, with shared fast RAM, there will always be better alternatives. Until then, they suck. I mean, this is so badly implemented, and they advertise it for gamers, lol. Even if you CF the APU with a supported GPU, a HD 7770 is better, so no thank you Reply
  • arthur42 - Sunday, January 19, 2014 - link

    The A10-6700T is in good supply in Europe; I've been running one for months now. Reply
  • aidnr - Monday, January 20, 2014 - link

    I'm pretty impressed with what they've done here, it looks like it is pretty evenly matched with Intel's offering - http://versus.com/en/amd-a10-7850k-vs-intel-core-i...

    Will be interesting to see how widely adopted Mantle is, seems pretty nice.
    Reply
  • samal90 - Monday, January 20, 2014 - link

    Can someone explain to me how AMD is able to make an incredible APU capable of running games at almost 60fps on the new consoles, but can't make a desktop APU that does the same? Why can't they use the same architecture? Maybe I'm missing something... but I'm just wondering. Reply
  • meacupla - Wednesday, January 22, 2014 - link

    Because the PS4/XB1 is not limited to using the FM2+ socket, because they use a much slower CPU portion, and because of thermal limits. Reply
  • Calinou__ - Saturday, January 25, 2014 - link

    The consoles use lower-level APIs, which are far more optimized. Reply
  • dsmogor - Friday, January 24, 2014 - link

    From a distance the AMD architecture actually looks a lot like the PS3's Cell SoC: a number of CPU cores plus VLIW modules that access the same bus and memory. If the GPU cores have some sort of manageable local memory (dunno), I can clearly see why Sony chose that for their next gen. Reply
  • Houdhaifa - Saturday, January 25, 2014 - link

    nice performance Reply
  • Theyear - Sunday, January 26, 2014 - link

    hi Reply
  • xtremesv - Tuesday, January 28, 2014 - link

    I'd rather give your grandpa an FX-4300 + GT640 GDDR5 for roughly the same price as the A10-7850K; even better, for $20 more you can ditch the FX-4300 for an FX-8120. Reply
  • thomascheng - Tuesday, January 28, 2014 - link

    You can get the 7850K from Microcenter for $129.00 right now, but only for a limited time. I think I'll bite at that price point. Reply
  • neal.a.nelson - Tuesday, January 28, 2014 - link

    No, that's the 6800K. The 7850K isn't even on the main AMD page.

    The AMD A10-7700K Unlocked Black Edition:
    - 10 Compute Cores (4 CPU + 6 GPU)
    - Features AMD Radeon™ R7 Graphics
    - AMD TrueAudio Technology for immersive audio

    AMD A10-7700K: $159.99 (reg. $179.99)
    AMD A10-6800K: $129.99 (reg. $159.99)
    Reply
  • rrgg - Sunday, February 02, 2014 - link

    Is there a stated release date for the A8-7600 (Kaveri)? I know it was pushed out but to when? Thanks. Reply
  • Novaguy - Wednesday, February 12, 2014 - link

    Hmmm, any news about A8-7600 availability? Or mobile parts?

    What I am really curious about is the performance of a Kaveri A8 + R9-290M (aka 7970M/8970M rebadge) in a laptop (such as the MSI model that was reviewed on Anandtech). Will that get rid of the frame throttling that was going on in that MSI laptop?
    Reply
  • DoctorBurp - Thursday, February 20, 2014 - link

    It is interesting to see how computers evolved into having two “brains”: the CPU, which works sequentially and is generally better with small amounts of data, and the GPU, which works in parallel and is generally better with large amounts of data.

    We humans also tend to have two ways of thinking, and to demonstrate it, let us talk about decision making. When dealing with a small amount of information we tend to take the sequential route: we investigate, write notes, explore in depth and operate step by step in order to make the right choice. However, when dealing with a large amount of information it is impossible to remember each and every detail, and investigating becomes tedious work. This is when we turn to our gut feeling, our intuition, and in the end make a decision because it “feels right”. This feeling is the result of processing that took place backstage in areas like our subconscious, which as far as we know works in a parallel way.

    Since computers were invented by humans, and for the most part reflect the way WE think, it is not surprising to find these similarities. I, for instance, find it fascinating and funny at the same time :) Maybe by trying to imitate the way we think we will end up understanding ourselves better…

    Just thought of sharing it with you all.
    Reply
  • extremesheep49 - Friday, February 21, 2014 - link

    I don't know if anyone will even see this now but...

    "The reality is quite clear by now: AMD isn't going to solve its CPU performance issues with anything from the Bulldozer family. What we need is a replacement architecture, one that I suspect we'll get after Excavator concludes the line in 2015."

    I don't know that this conclusion is very fair, considering that statement, if you compare it to a previous article linked below. The linked article recommends a (currently) $100 100W A8-5600K. The Kaveri equivalent is a $120 45W CPU of approximately the same performance.

    Doesn't the linked article's recommendation contradict your Kaveri conclusion, at least in some cases? Kaveri's CPU performance is probably sufficient for many discrete GPU setups.

    http://anandtech.com/show/6934/choosing-a-gaming-c...

    Quote from link:
    "Recommendations for the Games Tested at 1440p/Max Settings
    A CPU for Single GPU Gaming: A8-5600K + Core Parking updates"
    Reply
  • Novaguy - Sunday, February 23, 2014 - link

    Gaming performance is usually (but not always) GPU-bottlenecked, not CPU-bottlenecked.

    The reason a Trinity was recommended in a lot of gaming boxes is that in dollar-limited scenarios you'll often get better gaming performance by mating a $120 quad-core Trinity with a $300 GPU than a $220 i5 with a $200 GPU.

    For even better results, mate an $80 Athlon II X4 750K if you're going with a discrete GPU, but I don't think the GPU-less Trinity chip was available then.
    Reply
  • PG - Monday, February 24, 2014 - link

    I wanted to compare Kaveri to some other CPUs not in this review. Bench would be perfect for that, but the Kaveri CPUs are not listed there. Why? Can they be added? Reply
  • Cptn_Slo - Tuesday, April 01, 2014 - link

    Well, at least this shows that AMD is able to increase performance significantly given the appropriate die shrink. I'm a big Intel fan, but a healthy company/market needs competition, and it looks like AMD is able to offer that in at least some areas. Reply
  • zobisch - Wednesday, April 02, 2014 - link

    I have an H60 cooler on my 7850K with 2400MHz RAM, OC'd to 4.4GHz, and I love it... I think the corner for APUs will really turn when DDR4 boards come out. I would also like to see an 8-core part with a 24-compute-unit GPU, but that's probably a die shrink or more away. Reply
  • vickfan104 - Tuesday, May 06, 2014 - link

    An Xbox One/PS4-like APU is what I'm still looking for from AMD. To me, that seems like the point where an APU becomes truly compelling as opposed to a CPU + discrete GPU. Reply
