  • Drumsticks - Monday, January 06, 2014 - link

    Awesome! I can't wait. CPU increases might be a bit underwhelming with the lower clock speeds, but I'm pretty excited to see what Kaveri brings; good luck AMD! Reply
  • bholstege - Monday, January 06, 2014 - link

    The beta calculation speed seems way off - it's just a simple OLS regression, which is basically instant on modern hardware. My guess is all of the speedup is in the graphing, rather than the beta calculation. Reply
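For a sense of scale, the slope ("beta") of a simple OLS regression is just a couple of vector operations; a quick sketch in Python with NumPy, using made-up return series (the 1.3 slope and the noise levels are arbitrary illustration, not the demo's actual data):

```python
import numpy as np

# Hypothetical daily returns: 'market' is the benchmark, 'stock' the asset.
rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.01, 250)               # ~one year of daily returns
stock = 1.3 * market + rng.normal(0.0, 0.005, 250)

# Closed-form OLS beta: cov(stock, market) / var(market).
beta = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)
print(round(beta, 2))  # close to the true slope of 1.3
```

A few hundred data points like this take microseconds, which supports the guess that the demo's time went into rendering, not regression.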
  • ViRGE - Monday, January 06, 2014 - link

    Keep in mind that this is LibreOffice Calc, which is horrifically slow. So slow that the developers will pretty much be the first people to tell you that it's too slow. As such, I can totally believe that Calc really is that slow right now.

    The AMD/LibreOffice deal was announced back in July. It's basically a package deal that has AMD funding development of Calc so that they can finally refactor the code to improve its performance, while at the same time they'd also add OpenCL support for AMD. Consequently I'm far more interested in how the final, refactored version of Calc will stand up. If this is comparing old/slow Calc to OpenCL calc, then it's not a very useful comparison.
  • nico_mach - Wednesday, January 08, 2014 - link

    I wondered why they did a Calc demonstration instead of a more interesting python or R stats crunching example. That makes perfect sense now. Reply
  • dylan522p - Monday, January 06, 2014 - link

    I don't understand how AMD thinks this will work. The amount of dev time it would take to make your application offload to the iGPU is too large. Plus any workload that is performance-starved realizes a much larger benefit from a beefy CPU and/or beefy GPU, not something gimped on both. Reply
  • hpglow - Monday, January 06, 2014 - link

    I've been buying Intel since the C2D hit the market, but I can still see where this processor would be handy. An emulation cab or an HTPC come to mind. Or cases where price is the utmost concern. I don't see myself replacing Intel in my two high-end game rigs, but there are use cases for these. Reply
  • dylan522p - Monday, January 06, 2014 - link

    > An emulation cab or an HTPC come to mind

    Why? Intel still has the better single-threaded performance, an i3 wins on multithreaded vs all the APUs, and the iGPU is good enough for emulation. All you're really doing by going APU on an HTPC is wasting power.
  • Gigaplex - Monday, January 06, 2014 - link

    Intel drivers have some issues in HTPC environments. The one I'm currently seeing issues with is lack of full range RGB over HDMI. Reply
  • bwat47 - Tuesday, January 07, 2014 - link

    I have an old laptop that I use as a ghetto media center with XBMCbuntu; it's got Intel Ironlake graphics. With a recent kernel (3.10 or newer, I believe) I was able to get limited RGB over HDMI working :) Reply
  • FlanK3r - Tuesday, January 07, 2014 - link

    This is not true. Richland A10s are stronger in multithreaded workloads than a Core i3. Reply
  • KenLuskin - Sunday, January 12, 2014 - link

    Do you understand that supercomputers use GPU acceleration chips?


    For the same reason that Servers will employ AMD APUs.

    Maybe you need a remedial course about APUs?

    I suggest these 2 videos:

    1) Revolutionizing computing with HSA-enabled APU

    2) "Your processor's IQ: An Intro to HSA"
  • BMNify - Monday, January 13, 2014 - link

    "FlanK3r: Do you understand that supercomputers use GPU acceleration chips?


    For the same reason that Servers will employ AMD APUs.

    Maybe you need a remedial course about APUs? ...

    ...Revolutionizing computing with HSA-enabled APU"

    Whoa there, dobbin.
    Your logic is a non sequitur (Latin for "it does not follow"): in formal logic, an argument whose conclusion does not follow from its premises.

    First of all, they call Graphics Processing Units specialized co-processors for a reason:
    they are specially constructed to perform lots of compute-intensive, threadable tasks for algorithms where large blocks of data can be processed in parallel.

    For instance, released in 1985, the Commodore Amiga was one of the first personal computers to come standard with a GPU. The GPU supported line draw, area fill, and included a type of stream processor called a blitter which accelerated the movement, manipulation, and combination of multiple arbitrary bitmaps. Also included was a coprocessor with its own (primitive) instruction set, capable of directly invoking a sequence of graphics operations without CPU intervention.

    It also had a heterogeneous memory architecture, in that its "Chip RAM" was shared between the central processing unit (CPU) and the dedicated chipset co-processors, with access arbitrated by the Agnus DMA controller (hence the name).
    Agnus operates a system where the "odd" clock cycle is allocated to time-critical custom chip access and the "even" cycle is allocated to the CPU; thus the CPU is not typically blocked from memory access and may run without interruption. However, certain chipset DMA, such as copper or blitter operations, can use any spare cycles, effectively blocking cycles from the CPU. In such situations CPU cycles are only blocked while accessing shared RAM, but never when accessing external RAM or ROM.

    Prior to this and for quite some time after, many other personal computer systems instead used their main, general-purpose CPU to handle almost every aspect of drawing the display, short of generating the final video signal...

    Moving on to today: in your average so-called supercomputer, GPU-compute-intensive tasks/functions take up around 5% of the total runtime of a given app, while the rest is usually sequential and SIMD CPU code...

    Of all these supercomputer GPU-compute-intensive tasks, how many will you be running on your home/SOHO Heterogeneous System Architecture SoC exactly?
    None would be a fair guess...

    You don't really believe this AMD PR about a "processor's IQ", do you? Here's a hint: processors do not have any IQ; the world's best engineers can't even simulate cognitive responses in real time yet, never mind simulated IQ...

    And for fun, here's a little demo to show how a specialist graphics co-processor is good at fast graphics manipulation and not much else, as it does not have the high sequential integer/SIMD capability on a very fast interconnect of today's CPUs with their AVX2 SIMD, for instance.
  • BMNify - Monday, January 13, 2014 - link

    Oops, bad copy/paste there; I of course meant that Ken Luskin said that. Reply
  • ravyne - Monday, January 06, 2014 - link

    There's actually quite a number of applications which are bottle-necked both by the limited parallel throughput of CPUs (even with SSE and AVX) and by the latency / overhead of moving smaller amounts of data over the PCIe bus. Those applications will benefit enormously from having access to what's effectively a smaller, but practically latency-free GPU. Reply
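ravyne's latency point can be put in rough numbers; the figures below (a 10 µs fixed offload cost, 8 GB/s effective PCIe bandwidth) are assumed for illustration, not measurements:

```python
# Rough model: time to offload = fixed launch/transfer latency + size / bandwidth.
PCIE_LATENCY_S = 10e-6      # assumed round-trip launch + DMA setup cost
PCIE_BW = 8e9               # assumed effective host<->GPU bandwidth, bytes/s

def offload_overhead(size_bytes, latency_s=PCIE_LATENCY_S, bw=PCIE_BW):
    return latency_s + size_bytes / bw

# For a small 64 KB buffer, the fixed latency dominates the transfer itself:
small = offload_overhead(64 * 1024)
print(f"{small * 1e6:.1f} us")   # ~18 us, mostly fixed cost

# For a 256 MB buffer, bandwidth dominates and the fixed cost vanishes:
large = offload_overhead(256 * 1024**2)
print(f"{large * 1e3:.1f} ms")
```

For small buffers the fixed cost swamps the useful work, which is exactly the regime where an on-die GPU with near-zero offload latency wins.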
  • inighthawki - Monday, January 06, 2014 - link

    Go check out Microsoft's C++ AMP library. C++-like semantics, but all DirectCompute under the hood. The dev cost is minimal if the performance is worth it. Reply
  • nico_mach - Wednesday, January 08, 2014 - link

    More importantly, if these are games/ apps already written for xbox one/ps4, the porting time could be minimal. Reply
  • duploxxx - Tuesday, January 07, 2014 - link

    Depends on who actually is interested. Just look at the following announcement:

    If Java source code is optimised, think how many people just use Java to write applications...
  • tcube - Tuesday, January 07, 2014 - link

    Many! Boatloads of huge business software packages are written in pure Java... Reply
  • Megatomic - Tuesday, January 07, 2014 - link

    We use Java extensively in our data acquisition systems. Java is far more pervasive than many would believe. Reply
  • mikato - Wednesday, January 08, 2014 - link

    HSA exploitation in Java would be huge!

    Also, there are plenty of places where it would be a big advantage to a company to enable big performance improvements with some resources spent developing the ability to use HSA. It's a direct competitive advantage in certain situations.
  • KenLuskin - Sunday, January 12, 2014 - link

    Companies like MSFT, Google, Apple, and Adobe are ALREADY developing large-scale programs.

    Sure, for the small-time application putz, you would NOT understand....
  • BMNify - Monday, January 13, 2014 - link

    LOL, it seems you didn't actually read that link, so who's the putz (noun: a stupid or worthless person)? It would help support your AMD HSA SoC stance if you actually understood even the basics...
    Here it is again:
    "HSA targets native parallel execution in Java virtual machines by 2015

    AMD-led consortium takes steps to break multicore programming barriers
    Oracle seeks Java performance boost, joins HSA Foundation
    Industry consortium to tackle open spec for software use across multicore
    HSA president says parallel acceleration 'belongs in the Java virtual machine'

    By Agam Shah
    IDG News Service |Aug 26, 2013 5:28 PM"

    Here, try this remedial central-processing basics video:
    "See How the CPU Works In One Lesson"
  • axien86 - Monday, January 06, 2014 - link

    The range of market segments that AMD's Kaveri addresses, from 15W/35W notebooks to micro HTPCs and all-in-ones, up to 65W/95W desktops and powerful APU servers, is more remarkable than other competing solutions because of the new HSA architecture that unleashes the power of the Steamroller cores and the powerful Hawaii-based GCN engines. Reply
  • silverblue - Tuesday, January 07, 2014 - link

    Whoa boy, steady on. That's what we'd like to see, but we're not there yet. Power is nothing without control, or in this case, hardware doesn't mean a thing if the software support is lacking. Reply
  • khanov - Monday, January 06, 2014 - link

    The major stumbling block for me is the lack of memory bandwidth. It has already been shown that the Richland-series APUs scale well with increasing bandwidth for GPU-intensive tasks. Yet here we are with a significantly more powerful GPU component, and still we have only dual-channel DDR3? Reply
  • TheinsanegamerN - Wednesday, January 08, 2014 - link

    DDR4 hasn't launched yet; AMD doesn't have much of a choice. Desktop users can use four memory sticks to mitigate the issue, but laptop users are out in the cold (and any mini-ITX users). Reply
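For context on the bandwidth ceiling being discussed, here is the back-of-envelope theoretical peak for dual-channel DDR3; the DDR3-2133 speed grade is an assumption, picked as a high-end pairing:

```python
# Rough peak-bandwidth arithmetic for the dual-channel DDR3 concern.
# DDR3-2133, 64-bit (8-byte) channels, two channels; these are the
# standard theoretical peaks, not measured numbers.
channels = 2
bus_bytes = 8             # 64-bit channel width
transfers_per_s = 2133e6  # DDR3-2133 effective transfer rate

peak = channels * bus_bytes * transfers_per_s  # bytes/s
print(f"{peak / 1e9:.1f} GB/s")  # ~34.1 GB/s, shared by CPU cores and GPU
```

That whole budget is shared between the CPU cores and a 512-shader GPU, which is why the scaling-with-memory-speed results on Richland matter here.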
  • DigitalFreak - Monday, January 06, 2014 - link

    As with anything AMD related, I'll believe it when I see the real benchmarks. Reply
  • bsd228 - Monday, January 06, 2014 - link

    Especially when their own marketing benchmarks use a non-HT i5, and one of them only gets a 7% gain! I know AMD wins the internal graphics battle, but that's not one I care about personally. Reply
  • Drumsticks - Monday, January 06, 2014 - link

    You realize that no desktop i5 has Hyper-Threading, and the HT model costs over $300? And the 7850K will cost at most $190? It's not an unrealistic comparison. Reply
  • Doomtomb - Tuesday, January 07, 2014 - link

    Core i5-4570K with hyper-threading is $220-240. Reply
  • Doomtomb - Tuesday, January 07, 2014 - link

    Scratch that, that's the i7 my bad. Reply
  • ingwe - Monday, January 06, 2014 - link

    I hope these are great, but I don't expect them to be. We need some competition in more than just ultra mobile spheres... Reply
  • CosmonautDave - Monday, January 06, 2014 - link

    I hope they update their Dual Graphics compatibility--would be a nice feature if one was using a dGPU, to get a bit more bang for the buck out of the APU. Just a question of which cards I suppose? They tweeted about it: Reply
  • nuby - Monday, January 06, 2014 - link

    How's Linux support with AMD's new APUs? Reply
  • tcube - Tuesday, January 07, 2014 - link

    Not great, but OK... I managed to install their drivers and run a few things on Ubuntu with a Richland APU... It wasn't a walk in the park, but it ultimately worked, plus the open-source drivers are also OK-ish. All in all, still far behind the Windows optimizations, but slowly getting there. Reply
  • gradinaruvasile - Tuesday, January 07, 2014 - link

    I compiled the latest kernel (includes the radeon kernel driver module), mesa (opengl), xf86-ati (user space driver) and drm directly from git (on Debian Testing distro). On Trinity (A8-5500) and Richland(A8-6500) the drivers run well, HD decoding is working great and Steam games such as the hl2/source engine based ones - L4D2, Day of Defeat, TF2 and other hl2 mods run perfectly with slight visual adjustments such as forced multi core rendering, no AA and vsync on. Reply
  • mikato - Wednesday, January 08, 2014 - link

    Good info, thanks! Reply
  • Drumsticks - Monday, January 06, 2014 - link

    Bias is still bias no matter how much (or not) you try to hide it. Reply
  • Hrel - Monday, January 06, 2014 - link

    So... AMD is just never gonna be competitive again huh :( Reply
  • Krysto - Monday, January 06, 2014 - link

    Very interesting! Now all AMD needs is to be on a competitive node for once. They need to jump to 14/16nm FinFET as soon as it's available in 2015 (with their next-gen chips, obviously).

    Forget about cost-cutting. AMD needs some good PR, and it's going to get that PR if it has great performance and power consumption numbers, especially when compared to the competition. Always being two nodes behind Intel isn't helping much with that...
  • editorsorgtfo - Monday, January 06, 2014 - link

    Do not forget that Intel's 22 nm node is a marketing name and actually sits around 26 nm by generally accepted worldwide fab standards. GloFo is going to present a true 20 nm node in the middle of this year. Next-gen Carrizo should be expected to be true 20 nm, though it depends on foundry readiness. I'm a little confused as to why Carrizo is not going to have a DDR4 memory controller yet. Reply
  • dylan522p - Tuesday, January 07, 2014 - link

    Actually, no. The process node number hasn't matched the actual process for a while. 16nm FinFET is actually just 20nm with FinFETs, as GloFo, TSMC and Samsung are trying to pretend they are tied with Intel. Currently, Intel is 1.5 generations ahead, but Broadwell will make it 2.5; 20nm without FinFETs is coming soon as well, so it will be back to being 1.5 generations behind. Reply
  • editorsorgtfo - Tuesday, January 07, 2014 - link

    Do you have a source for that information?
    My source is
  • arnavvdesai - Monday, January 06, 2014 - link

    I actually code in JVM-based languages (think Java, Scala, JavaScript). There is a project which AMD is working on with Oracle which will offload certain instructions to the GPU without the need for the coder to do anything special. If that project pans out and gives the gains it is supposed to deliver, it would be pretty awesome and would change my life for the better, at least. Reply
  • duploxxx - Tuesday, January 07, 2014 - link Reply
  • chizow - Monday, January 06, 2014 - link

    20% IPC improvement from Steamroller is short of some lofty expectations from AMD fans, but still impressive. I also like how AMD committed to a hard release date that isn't months in the future, hopefully it performs well and we see what it is capable of in about a week. Reply
  • jospoortvliet - Thursday, January 09, 2014 - link

    Too bad the frequencies are down about 15%... Making for a mere ~5% performance increase, while 20% would already have been marginal :( Reply
  • GummiRaccoon - Monday, January 06, 2014 - link

    The grammar of this article drives me crazy, can you stop using the British style of referring to companies as groups of people? AMD is a singular entity. It is A corporation, not some random group of people. Beyond that you don't even keep it consistent. Just remember, "The United States is" not "The United States are." I cringe every time I see "AMD have" or "AMD are" and you don't even do it for "AMD does" or "AMD calls." Also I notice you store passwords as clear text, this is a disaster waiting to happen. Reply
  • editorsorgtfo - Monday, January 06, 2014 - link

    How did you get evidence that passwords are stored as text? Reply
  • GummiRaccoon - Monday, January 06, 2014 - link

    Reset your password. They e-mail you your current password. If they are able to send you your password, that means it is stored as plain text. That means they have a database that has everyone's password stored with their e-mail address and username. This is how everyone gets their data/accounts compromised. Reply
  • Ryan Smith - Tuesday, January 07, 2014 - link

    Just to be clear here, we're not storing your passwords in plaintext. When your new password is generated it's copied out to the mailer before being hashed and stored.

    And then we suggest you change your password again, just so what's in the email is no longer valid (in case you're compromised on your end).

    "You can login at and then change your password at to whatever you like."
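The ordering Ryan describes (the temporary password goes to the mailer first, and only a hash is ever stored) can be sketched as follows; the helper names and the salted-SHA-256 scheme are illustrative assumptions, not AnandTech's actual code:

```python
import hashlib
import secrets

# Hypothetical stand-ins for the site's mailer and credential store.
outbox, db = [], {}

def send_mail(address, body):
    outbox.append((address, body))

def store_credentials(email, salt_hex, digest):
    db[email] = (salt_hex, digest)

def reset_password(email):
    # Random temporary password; the plaintext exists only long enough
    # to be handed to the mailer, as described above.
    temp = secrets.token_urlsafe(12)
    send_mail(email, f"Your temporary password is: {temp}")
    # Only a salted hash is persisted, never the plaintext.
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + temp.encode()).hexdigest()
    store_credentials(email, salt.hex(), digest)
    return temp  # returned here only so the demo below can verify it

def check_password(email, password):
    salt_hex, digest = db[email]
    candidate = hashlib.sha256(bytes.fromhex(salt_hex) + password.encode())
    return secrets.compare_digest(candidate.hexdigest(), digest)

temp = reset_password("user@example.com")
assert check_password("user@example.com", temp)
assert not check_password("user@example.com", "wrong-guess")
```

In production a deliberately slow KDF (bcrypt, scrypt, PBKDF2) would replace the bare SHA-256; the sketch only illustrates the ordering: the plaintext is mailed before hashing and is never persisted.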
  • ByeLaw - Tuesday, January 07, 2014 - link

    Hi Ryan,

    It's bad practice to send someones password via email. Email is normally not encrypted, so sending someones password in an email will be in clear text and can be intercepted. Adding to the fact that most people re-use passwords (yes I know, a bad practice ... but happens a lot) you could be compromising your users at other sites as well.
  • Ryan Smith - Tuesday, January 07, 2014 - link

    Again, the user selected password is never sent over email. We only send your randomly generated temporary password that way. Reply
  • ByeLaw - Tuesday, January 07, 2014 - link

    Ah, my mistake then. Sorry Ryan. Reply
  • Bob Todd - Tuesday, January 07, 2014 - link

    Incorrect assumptions aside, "This is how everyone gets their data/accounts compromised" isn't quite true either. While plain text storage of passwords would obviously be a no-no, password re-use is the real killer for large scale account/data hijacking. You shouldn't be re-using passwords anywhere, especially between important accounts (e.g. bank/email [which might have details of other accounts]/etc.) and a blog. Even if Anandtech was storing passwords in plain text (which they aren't), any compromise of their system should only allow an attacker to impersonate you here, not drain your bank accounts and ruin your life. Reply
  • Da W - Tuesday, January 07, 2014 - link

    OH MY GOD! The NSA is going to have the password to my Anandtech account! They will know what i posted, when, and that my name is not really Da W!!! Reply
  • Gigaplex - Monday, January 06, 2014 - link

    "...can you stop using the British style..."

    I respectfully disagree. You're aware that Ian is British, no? I'm neither British nor American, and I prefer the British way.

    I agree, however, that storing passwords in plain text is a no-no.
  • nafhan - Tuesday, January 07, 2014 - link

    It sounds pretty clear that passwords aren't being stored in plaintext.

    If it were, though, that would be a good example of why you should use separate passwords on each website.
  • Ian Cutress - Tuesday, January 07, 2014 - link

    Yes, I am British, hence the writing style. Setting the word processor to English (U.S.) doesn't catch everything. Regardless, surely 'The United States' is a collection of states, and thus plural? Just like a company is a collection of people, and thus plural as well. This is how I was raised :D Reply
  • lmcd - Tuesday, January 07, 2014 - link

    "The United States" is a collection of states. Or, namely, a set of states. And since "The United States" is only one set, it is singular. Reply
  • mikato - Wednesday, January 08, 2014 - link

    There is actually a lot of history behind using "United States is" vs "United States are" and I think that common usage now and the reasons behind it easily trump any purist grammar viewpoint (even though the purist result is even debatable). People think of the United States more as one country than as a collection of states nowadays. "United States" is the name of the country so it doesn't matter if it has an "s" on the end.
  • erple2 - Friday, January 10, 2014 - link

    If you REALLY want to be technical, the name of the country is "United States of America". Typically the "of America" is dropped for expediency.

    Besides, one of the marvelous things about the language is that it evolves over time. And ultimately, that's what's important. Language evolves.
  • FITCamaro - Tuesday, January 07, 2014 - link

    Actually it is supposed to be "The United States are". Each state is a sovereign entity united under a central government. It wasn't until the 20th century largely that the nation was referred to in the singular. Reply
  • vFunct - Tuesday, January 07, 2014 - link

    actually "United States is" is correct.

    It is "United" after all, which means singular entity. It is also a singular legal entity, like a corporation (BTW a corporation is not a collection of people, it is also a singular legal entity..)

    If you skip the "United" part, then it is "the States are.."
  • Drumsticks - Monday, January 06, 2014 - link

    I love Intel and have Intel CPUs in all my devices (barring my windows phone), but lol.
  • Drumsticks - Monday, January 06, 2014 - link

    Why is this allowed, Anandtech? You have the best site in all of the tech internets, but not a single way of flagging spam. Reply
  • aryonoco - Monday, January 06, 2014 - link

    It's Ian Cutress; he's good at many things, but grammar isn't one of them ;-)

    A while ago, after he'd written one of his other technically brilliant but grammatically very poor pieces, I suggested in a comment that if he takes this job seriously (which I know is a side job for him), he should enroll in a writing course or something, but I got flamed by everyone who thought I was being rude.

    This isn't even one of his bad pieces; sometimes I really struggle to even finish his sentences. Which is a pity, because he's full of very interesting knowledge...
  • silverblue - Tuesday, January 07, 2014 - link

    I don't follow. Drumsticks is questioning why users such as Intellover are allowed to post... unless the insinuation is that Intellover = Ian Cutress? ;) Reply
  • mikato - Wednesday, January 08, 2014 - link

    Come on, did you read the same article I did? Have any examples? I had no problem and the grammar was correct. Reply
  • Gigaplex - Monday, January 06, 2014 - link

    Since when was the A10-6800K faster than the i5 4670K? Reply
  • artk2219 - Tuesday, January 07, 2014 - link

    Whenever anything graphics-related comes into play. The i5 is a great CPU with a very meh iGPU, and terrible drivers to boot, so that isn't helping anything. Richland currently smashes the HD 4600, and Kaveri will pull ahead even more. The only thing Intel has that's faster than Richland is Iris Pro, and that only comes on $400+ processors. Plus it's easy to be fast when you have 64 to 128MB of on-die memory. Reply
  • theduckofdeath - Saturday, January 18, 2014 - link

    If you're buying an overclockable Intel processor, it is not very likely that you'll stick with the integrated GPU, and when you stick a dedicated GPU into both systems, the Intel outperforms the AMD with one hand tied behind its back. Reply
  • aryonoco - Monday, January 06, 2014 - link

    AT badly needs a flag/report button, or at least an ignore one.

    It's one of the few websites where I actually read the comments cause they can, on occasion, be useful and informative, but when things like Intellover happen, it becomes very difficult to stay focused on the conversation.
  • Flunk - Monday, January 06, 2014 - link

    Well, it's certainly an interesting way to go. It could be the way everything ends up in the future. We'll see when the final silicon is tested but I suspect that it will end up slotting in under Intel's i7 line somehow. Reply
  • Mathos - Monday, January 06, 2014 - link

    Most likely it will; these are mainstream APUs, not high-end desktop enthusiast CPUs. Though I really want to see how well the 20% x86 IPC claim holds up. If they managed a solid 20% IPC improvement across the board, it could make it worthwhile for them to release an AM3+ replacement for the current FX series. It would at least put their CPUs real close to Intel's on IPC again. Reply
  • cmikeh2 - Tuesday, January 07, 2014 - link

    Their relatively recently leaked roadmap (of course it could be wrong) shows them not releasing any new FX processors in 2014. Reply
  • silverblue - Tuesday, January 07, 2014 - link

    It's very much open to interpretation. If Steamroller is 20% faster per clock, is this single or multithreaded? If it's 20% faster per core, does this mean that despite the 400MHz drop, each core is 20% faster than Richland? One thing that appears to be set in stone is that the improvement is in the x86 hardware, regardless of HSA/hUMA and what-have-you.

    If we end up with a situation where Kaveri is 20% faster per clock, the A10-7850K will beat the A10-6800K by barely 10%, assuming the 20% is a realistic average. The downclocking of the Kaveri range to bring down power usage makes sense with a significantly larger GPU than before; I'd be very much interested in seeing the power usage of the A10-7700K considering that it allegedly has 384 GPU cores.

    I believe that Steamroller's main improvement in CPU terms is removing the multithreading bottleneck, something that has hampered the Bulldozer architecture. A 30% performance boost chip-wise over the original FX series (the FX-4130 may be an ideal comparison - 3.8GHz with 3.9GHz turbo) would be a decent jump over the course of two years.

    If we were to expect a 20% jump over Piledriver (still mainly multithreaded, I expect), then the following comparison may be of some use:

    Kaveri's aim should be to compete with the top i3s; each has four threads, the former having smaller cores with a view to parallelism and the latter having two fat cores with an ability to utilise them fully. I don't think we should expect Steamroller to compete with the i5s on a CPU core-for-core basis (especially not FP), and certainly not until Excavator. A lack of L3 will always hurt in such comparisons anyway.
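silverblue's per-clock arithmetic above can be checked directly; the base clocks are assumed from the announced parts (4.1 GHz for the A10-6800K, 3.7 GHz for the A10-7850K, i.e. the 400MHz drop) and the 20% IPC gain is taken at face value:

```python
# Hypothetical check of the "20% IPC but lower clocks" net gain.
# Clocks assumed: A10-6800K at 4.1 GHz base, A10-7850K at 3.7 GHz base.
ipc_gain = 1.20
clock_ratio = 3.7 / 4.1          # ~0.90

net = ipc_gain * clock_ratio     # perf ~ IPC x clock
print(f"net speedup ~ {net:.2f}x")
```

That lands at roughly 8%, consistent with the "barely 10%" estimate.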
  • Death666Angel - Tuesday, January 07, 2014 - link

    20% better IPC = instructions per clock. Got your answer in the article. :-) Reply
  • silverblue - Wednesday, January 08, 2014 - link

    It looks like an inference by the author as opposed to anything else. We'll see in a week! :) Reply
  • Notmyusualid - Tuesday, January 07, 2014 - link

    Really, I shake my head, when I see all the AMD-haters on forums like this.

    Are their CPUs REALLY that bad? I mean, honestly?

    I think not. There are positives and negatives on each side of the equation. I admit Intel have an edge in performance (especially in the mobile parts), but are they competitive? I think so.

    My buddy has an AMD 6-core cpu (forgotten the actual cpu gen code), and it is fast. He even managed to drop it into an old motherboard too, which I thought was cool also.

    He is not cracking passwords, nor playing games, but does some photographic image work, browsing, music & Bluray playback. No complaints at all.

    Let us try to think back to the early 2000s, and the performance we had available then. Now THAT was slow, not what AMD are putting out today.

    So he is happy, I'm happy he is happy, so why all the tears?

    I'm not a fan-boi either way, I have dual 7970s, and on my 3rd Intel i7 Extreme in a row. But I think some of you here need a life.

  • herrington - Tuesday, January 07, 2014 - link

    Because those losers have got nothing to do in their mom's basement, so the only thing they can do is troll all day. Reply
  • jabber - Tuesday, January 07, 2014 - link

    Oh I agree. If this was 20 years ago I could understand when you had the poor AMD/Cyrix type CPUs.

    But there really isn't a bad current CPU out there. All of them will get most people through the day.

    Plus as AMD demoed a couple of years ago, even tech fanboys can struggle to tell the difference between a top end AMD and Intel CPU in real world usage.
  • lmcd - Tuesday, January 07, 2014 - link

    Anandtech has demoed the difference, particularly in laptops -- an AMD A10m Richland + Radeon HD 7970m lost to an Intel i7 mobile + Radeon HD 7970m by a significant margin. Reply
  • Notmyusualid - Wednesday, January 08, 2014 - link

    Yes, I saw that article when it came out, shook my head, and issued AMD a 'mental report card' of must try harder, and I made reference to the mobile parts and their lower performance in my comment.

    However, my friend, as I also mentioned, doesn't play games, I do, hence my i7.

    I was merely stating, like another poster later did, that there is not a bad DESKTOP CPU out there. They all pull their weight, and most people can get by with the modern performance they have on offer.

    Hence the stagnation in the desktop industry; many don't feel a need to upgrade so regularly anymore, having 'enough' performance for their everyday needs. Some refused to accept Win 8 (no help either), whilst others moved over to tablets (which I cannot stand).

    Previously, this well-heeled friend of mine upgraded regularly, handing down his 'old' (lol) motherboard & parts to me as he saw fit. It was a boon for me at the time, really, and it has stopped since this 6-core AMD came into his life.

    @jabber, have a link to that demo you spoke of?
  • mikato - Wednesday, January 08, 2014 - link

    Here is my example of what you said. I just "upgraded" my wife's gaming computer. She's been gaming fine on a Core 2 Duo for a few years, and I recently put in a better Core 2 Duo (E8500) from ebay, a new video card (GTX 760), and a little more DDR2 memory and she's off to play Call of Duty Ghosts :) I think it will last at least another year that way. The CPU purchase was $30 and put the system over the system requirement there. The memory I scrounged and was gifted to from work. And the video card will obviously be carried over to a new build whenever that happens. So the way I think of it, upgrade cost for this end of the road system was just $30 and that's worth it to me to stave off a new build, new purchases, full reinstall of OS and games etc for a little while longer. Oh yeah, I'll sell that old CPU on ebay as well so net is probably even less. Obviously this is not everybody's situation but it shows us playing a brand new FPS game on pretty old hardware which likely couldn't have been done in the past. Rest assured, when the new build is done eventually it will be screaming fast. I wonder if AMD will have FX CPUs by then. Reply
  • silverblue - Wednesday, January 08, 2014 - link

    That was on an MSI GX70. Both generations of said laptop appear to perform very badly, but no reason was given for it. It could be the CPU, but something doesn't look right. Reply
  • YuLeven - Tuesday, January 07, 2014 - link

    They aren't "that bad". But your friend being able to do something with a particular CPU isn't a compelling reason to call it 'good', especially compared to its competitors.

    It isn't about being slow or not. Why would anyone come to a hardware-specialized site and go 'If I'm happy with this computer, nothing really matters'? You're clearly missing the point here.
  • Mystiq - Tuesday, January 07, 2014 - link

    It took me a little while to figure this one out, and I hope it comes true, but I have serious doubts.

    AMD's goal looks to be like this:

    1) Release an APU with a powerful GPU and a good-enough CPU
    2) Develop an architecture that takes advantage of AMD's GPUs and leverages them like a CPU
    3) Develop an API that is light on the CPU to make up for AMD's historically-weak (FPU) CPUs
    4) Lean on said architecture and the API to make the weak CPU irrelevant because the GPU side of the APU is better at the relevant tasks anyway
    5) Profit

    AMD is trying to upend both Nvidia and Intel by making up for their weak CPUs by saying "fuck it" and treating the GPU as a powerful vector coprocessor, much better than AVX. If it works, and games take advantage of both Mantle and HUMA (you have both a Kaveri APU and maybe a Radeon R7 card), I shudder to think of the framerate.

    Of course, we need to get rid of DDR3 already...
  • Mystiq - Tuesday, January 07, 2014 - link

    I'd just like to make it clear that the primary goal of Mantle is probably to take the CPU out of the equation when it comes to graphics performance, making AMD's APUs just as competitive as the Core i7 -- because Mantle has made the CPU irrelevant. You can argue there will come a point again where we're putting out ridiculous photo-realistic graphics and the CPU again becomes a bottleneck, but maybe they've thought of that, too. In the meantime, using HUMA can also let AMD slap both Intel and Nvidia. You need to code a program to take advantage of MMX/SSE/AVX/whateverIntelComesUpWithNext (or at least change compilers), so assuming HUMA catches on (a very difficult way forward), this could get interesting. Reply
  • extide - Tuesday, January 07, 2014 - link

    You need to code a program to use HUMA too... It's not a free lunch. Reply
  • abufrejoval - Tuesday, January 07, 2014 - link

    That's very much how I see it, too. But I see the gamble as being extremely problematic, because so far all of that only works in a very small niche: Where one APU is enough.
    With 1080p becoming the lower end in everything from smartphones to TVs or beamers, I don't see a single APU powerful enough to become mainstream.
    Yes, I can run Unigine Valley at 30FPS in a 1024x576 window on Trinity and Richland, and perhaps with Kaveri and Mantle it will work at 720p, but that still falls short of what most people will want. It’s half a PS4, and it needs to scale to at least twice a PS4.
    Now if there was a *natural* scaling path, like the ability to simply add another APU or three to gain resolution (1080p and 4k), then they'd have me convinced.
    But currently the only way is to add a discrete GPU and HSA won’t scale with that.
    Well, compared to the current situation, where 50% of the APU die area becomes useless as soon as you add a dGPU (which made Trinity/Richland a hard sell for gaming, IMHO), HSA code could still use the iGPU portion of the APU for something useful. But basically a developer would again have to distribute their code into at least two distinct and individually sized buckets of compute resources, and unless 90% of all PCs out there had it, it's very unlikely anybody will go through the effort.
    Kaveri needs to be multi-processor and designed with an interconnect which allows creating a single-image SMP HSA system with at least four nodes in a gaming rig, and perhaps a little more for HPC or even server use. I also believe that Kaveri should be sold in GPU-like modules with DRAM built in, soldered on and optimized for that specific module: GDDR5 or DDR4, depending on where you want to wind up in price and power. These modules should then either be mounted flat for the single configuration or stuck into a passive backplane to create dual, quad or whatever sized rigs.
    With Mantle AMD has game developers ready to invest some fundamental work to redo their engines, if now they could make it scale I could see this turn into a stampede.
    As a single APU only design, it could die because the size of the ecosystem is too small to sustain it.
  • mikato - Wednesday, January 08, 2014 - link

    About scaling... they can actually just add another CPU module or two, or GPU cores. These would be bigger more expensive APUs but they'd be what you want. This is sort of what they did with the Jaguar APUs that are in the new consoles. With their module based architecture, and APU marriage, they know they have this flexibility. We'll see what they choose to do with it. Reply
  • silverblue - Tuesday, January 07, 2014 - link

    AMD's weak FPU performance is more of a Bulldozer thing. In any case, in SSE calculations, it should still equal or beat Phenom II, even when referring to an FX-4xxx CPU. Reply
  • Dribble - Tuesday, January 07, 2014 - link

    Pushing everyone to adopt a different architecture only works if you control the market (i.e., in this case, you are Intel). AMD is a small-time player; for most companies it's simply not worth all the effort it would take to develop for HUMA if 90%+ of your market can't use it. Hence, while the tech may be great, you know it will fail like the last few iterations of AMD CPUs, which also had PowerPoint slides that promised great things for the CPU/GPU combo but have actually come to nothing. Reply
  • Mathos - Tuesday, January 07, 2014 - link

    Actually, people aren't taking something into consideration here. AMD does now have the ability to control the market when it comes to gaming. All Xbox One and PS4 games are running on said AMD processors, with hUMA already built in. Those 8-core Jaguar APUs were designed with that in mind. Any games ported from those consoles to the PC will have support for HSA by default, just the same as any game designed to run on all three. This is apparent when you look at the PS4, where both the CPU and GPU in the APU use the same bank of GDDR5 memory. Something else people forgot about.

    Something else to digest: Intel has been doing this for a while with Quick Sync on their CPUs. So it's no surprise that AMD would make an effort to utilize similar tech on their APUs as well.

    To the person saying it'd have to be twice a Jaguar APU: those CPU cores in the Jaguar APU are minimal-function x86-64 cores. Plus, they run at about half the frequency of these full Steamroller cores, which would effectively make a 4-core/2-module Kaveri APU equal to that console APU other than the GPU component.

    Now, about AMD's weak FPU: I have to look back at the older reviews comparing the Phenom II and Bulldozer/Piledriver. Every time I do, I realize people were forgetting they were comparing 4 FPUs to the previous generation's 6. Since the BD/PD 8xxx CPUs were 4-module, they only had 4 FPUs, whereas the older PII X6 had 6 full cores, meaning 6 FPUs. On a per-FPU basis, BD/PD was actually a lot stronger than Thuban/Deneb.
  • silverblue - Wednesday, January 08, 2014 - link

    BD's FlexFPU could do double the work of the FPU inside a Phenom II - two 128-bit instructions instead of one. Reply
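    Taken with Mathos's per-FPU point above, the back-of-the-envelope arithmetic works out roughly even at the chip level. A toy model in Python (assumed idealized figures only: one 128-bit FP op per Phenom II FPU per cycle, two per FlexFPU; clocks, FMA and scheduling ignored):

```python
# Hypothetical peak 128-bit FP throughput model (not measured data).
def peak_128bit_ops_per_cycle(fpus, ops_per_fpu):
    """Theoretical 128-bit FP operations a chip can retire per cycle."""
    return fpus * ops_per_fpu

phenom_x6 = peak_128bit_ops_per_cycle(fpus=6, ops_per_fpu=1)  # 6 full cores
fx_8350 = peak_128bit_ops_per_cycle(fpus=4, ops_per_fpu=2)    # 4 modules

per_fpu_phenom = phenom_x6 / 6  # 1.0 op/cycle per FPU
per_fpu_fx = fx_8350 / 4        # 2.0 ops/cycle per FPU
```

    Under these assumptions the FX part retires slightly more 128-bit work per cycle (8 vs. 6), and exactly twice as much per FPU, consistent with both comments.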
  • abufrejoval - Thursday, January 09, 2014 - link

    The problem is screen resolution: 60FPS or even 30FPS at 1080p can't be done with a single 128-bit DDR3 bus, and that's all APUs can offer today. The PS4 using GDDR5 and the Xbox One using eSRAM should prove that to the less technically inclined. At the moment the *top* Trinity/Kaveri APUs are 720p or 1K only for reasonable gaming performance. And while AMD has a whole plethora of APU bins going down, only a dGPU is available going up, and that doesn't include HSA.
    2K is the bare minimum you need today for anything stationary, consoles or PCs and this CES is about going from 4K to 8K for TV screens. So if you don't have a clear answer, vision and growth path today to address these resolutions any chance to come out of the niche is severely hampered.
    It doesn't mean you have to deliver 4K yet, 2K is enough, but unless developers know it will be there by the time they need it, they won't take the risk of jumping for HSA. Nor perhaps the consumers, who would certainly prefer a simple seamless upgrade for higher resolution or would like to play the same games on the differently sized screens around the house.
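    The bandwidth claim is easy to sanity-check with peak theoretical numbers (a sketch with assumed configurations: dual-channel DDR3-2133 for the APU, and the PS4's published 256-bit GDDR5 at 5500 MT/s; sustained bandwidth is lower in practice):

```python
def peak_bandwidth_gb_s(bus_width_bits, mega_transfers_per_s):
    """Peak DRAM bandwidth in GB/s: bytes per transfer times transfer rate."""
    return bus_width_bits / 8 * mega_transfers_per_s * 1e6 / 1e9

apu_ddr3 = peak_bandwidth_gb_s(128, 2133)   # dual-channel DDR3-2133
ps4_gddr5 = peak_bandwidth_gb_s(256, 5500)  # PS4's GDDR5 configuration
```

    That is roughly 34 GB/s against 176 GB/s, a better-than-5x gap, which is the core of the resolution argument here.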
  • silverblue - Thursday, January 09, 2014 - link

    Doesn't the Radeon Bus mitigate the bandwidth limitation somewhat? Reply
  • lmcd - Tuesday, January 07, 2014 - link

    Which is a terrible idea because, as weak at compute as Kepler is, Nvidia can upend their roadmap and go back to some of the ideas behind Fermi, which would wipe out the compute advantage really quickly.

    And then there's Intel's mammoth Knight's Landing looming overhead.
  • dragonsqrrl - Tuesday, January 07, 2014 - link

    Where's that guy bitching about 'doctored' die shots over in the Tegra K1 announcement article? lol, Oh ya, I forgot this is an AMD product, so it's okay. Reply
  • Lepton87 - Tuesday, January 07, 2014 - link

    People should just accept the fact that there are many varieties of English and not act offended because a Brit uses British English, because they think it's strictly an American site and everything should pander to them. He refers to companies in the plural, so what? Does that hamper your reading comprehension? Reply
  • YuLeven - Tuesday, January 07, 2014 - link

    Well said! Languages are about effective communication. What's the matter with a British author writing in British style? Now shall we drop this utterly pointless patriotism thing and focus on the purpose of this site: technology? Reply
  • Gadgety - Tuesday, January 07, 2014 - link

    I couldn't wait for Kaveri. Now I can.

    "AMD’s Tech Day at CES 2014, all of which was under NDA until the launch of Kaveri, AMD have supplied us with some information that we can talk about today." It's two weeks until Kaveri is available to the public, and at the US's largest consumer electronics fair there's no new information from AMD?? NDA? This, and the delay of Mantle. It's starting to feel bad.
  • testicular migraine - Tuesday, January 07, 2014 - link

    A shame that AMD is (are?) done and dusted. Without AMD around, we can look forward to years of NetBurst/Itanium pleasure until something ARM-based can make Intel sweat again. Reply
  • mr_tawan - Tuesday, January 07, 2014 - link

    What I'd love to see is an APU with a mid-range-class integrated GPU (like the HD 7790) + 4 mid-range-class CPU cores (something equivalent to the Core i5-4670). It would be a killer.

    Also please don't forget about the FX line, AMD!
  • YuLeven - Tuesday, January 07, 2014 - link

    I'd love to see this APU performing good and strong, but I'll wait and see.

    AMD promised huge IPC gains on Bulldozer, Vishera and Richland. None of them made it out of AMD labs.
  • silverblue - Tuesday, January 07, 2014 - link

    Depends on the workload, really. In some usage scenarios, FX could barely outpace Thuban let alone beat it, whereas in others it was definitely faster.

    Vishera was touted as, IIRC, 7% faster at the same clocks, or 15% faster at the same power usage. I would say that was correct. Not huge, but it wasn't expected to be.

    Richland was a refined Trinity, more in process than anything else. Clock speeds were better for equal or only marginally higher power consumption. Again, it wasn't touted as a huge improvement.
  • hero4hire - Thursday, January 09, 2014 - link

    So, stagnant performance over the last three generations. Is that what we should expect from a quality roadmap? Reply
  • bwat47 - Tuesday, January 07, 2014 - link

    I wonder: would it be possible to utilize the APU for TrueAudio if one has a dedicated card that doesn't support it? I recently got a 280X, which I found out doesn't support the new TrueAudio, and I've been wanting to upgrade my CPU. Reply
  • yankeeDDL - Tuesday, January 07, 2014 - link

    I admit to being a mild AMD fan, so perhaps I am reading a bit too much into this, but I see very good news for all users in this review.
    First of all, starting off with the IPC disadvantage (mentioned by Anand), which today is on the order of 40% on the CPU side. This is a fair approximation, IMHO.
    With a 20% improvement, we're already cutting the distance by half, which is not bad, considering the noticeable advantage of AMD's APUs on the GPU side.
    Add to this that with Mantle the first data shows a 45% increase in performance. All "free". That's tremendous, objectively.
    Add to this that the A10-6800K is almost half the price of the i5-4670K, which, roughly, it is on par with in real life ... and you get a pretty nice picture painted for the 7850K.
    If the price stays the same ... it may be that finally, Steamroller is worth upgrading over the (extremely) aged Stars core of the Athlon/Phenom II with better performance across the board.
    Not a huge achievement, I must admit, but it was about time ... and better late than never.
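    The "cutting the distance by half" arithmetic checks out under a naive multiplicative model (assumptions: the 40% lead and the claimed 20% IPC gain compose as pure scalar factors, and nothing else changes):

```python
intel = 1.0
amd_before = intel / 1.4        # Intel holds a ~40% per-core lead
amd_after = amd_before * 1.2    # AMD's claimed +20% IPC for Kaveri

deficit_before = 1 - amd_before  # ~28.6% behind Intel
deficit_after = 1 - amd_after    # ~14.3% behind Intel
```

    The deficit relative to Intel drops from about 28.6% to about 14.3% — exactly halved under these assumptions.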
  • mikato - Wednesday, January 08, 2014 - link

    I agree. The new tech features are where I see the big upgrade advantage. I don't know if the CPU side will be as much of an improvement as everyone wants; IPC is up but clock speed is down. However, CPU performance has been quite good since the last flagship APU, the A10-6800K. Gamers are looking for another 6- or 8-core part to put any CPU disadvantage to rest, with no integrated GPU required. With AMD not having such a processor in any plans right now, gamers building systems have to accept the APU or go Intel. AMD may be choosing to do this to push the new tech and get adoption. That's probably the focus they need and the long game they need to play. AMD will lose gamers, but they are a small percentage of customers. All just theories. Perhaps if AMD can fully move FPU work to the integrated graphics on APUs, then the need for more cores evaporates. Reply
  • extide - Tuesday, January 07, 2014 - link

    I envision a point where CPU's don't even have an FPU on them anymore. They utilize the iGPU for all floating point math, scalar AND vector. Seems to me that this is where AMD is trying to go. Reply
  • mikato - Wednesday, January 08, 2014 - link

    Yep, I believe this has been part of the plan since their new architecture began (module based architecture and APUs with solid GPUs integrated on die go together). You have to admire their long term vision for being the underdog in the market. Reply
  • JohnHardkiss - Tuesday, January 07, 2014 - link

    Is it true that the Kaveri A10-7800 will have the same iGPU as the Kaveri A10-7850K? Looking at the leaked data, it seems to be the case. Two points:

    1) Which Intel Haswell CPU will the A10-7800 compare to, and
    2) it is said to be released on the 14th of January as well, but I can't find any new news about it.

    Anyone any insights about these points? Thanks.
  • azazel1024 - Tuesday, January 07, 2014 - link

    A couple of things to keep in mind. The current Iris Pro part, Intel's Core i5-4570R, is actually pretty cheap. As cheap as these? Doubtful, but it's priced at $288 a tray. Not exactly $400+ to get into Iris Pro on the desktop.

    Next, the single-thread performance disadvantage: yeah, it is pretty bad. Kaveri makes up some of it. However, keep in mind that AMD is claiming a 20% boost in IPC, but clock speed is also cut back 10%, so actual performance is only advancing about 10% or so. It's something, but it isn't any more than Intel's move from Ivy to Haswell was, in effect. Which means AMD hasn't really gained ground going from Richland to Kaveri relative to Intel's move from Ivy to Haswell. There might have been a slight improvement, but only slight.

    Intel, on the other hand, has cut AMD's iGPU lead in the move from Ivy to Haswell, gaining more there than AMD is gaining going from Richland to Kaveri. AMD still generally has the lead, but Intel is cutting it down.

    Broadwell is claimed to be a pretty huge gain in iGPU for Intel again, which if that proves to be true, Intel might actually have a lead in performance or be very, very close behind.

    Unless Kaveri brings a lot better power-management tech to their APUs, it doesn't look good: Richland was pretty far behind Ivy, and Haswell makes the difference downright embarrassing in idle, light workloads, and performance per watt under heavy load. I don't see Kaveri improving that much. It has a slightly better TDP and better efficiency under load, but it likely still won't be as efficient in performance per watt as Ivy in most workloads. Under light/idle loads it's likely to be pretty bad... which factors into the price advantage for AMD if you are looking at business machines or machines that are going to be on most/all day long.

    A savings of $10 or $15 in power over a year doesn't sound like much, but if a machine has a 3-year expected service life, or even 4 or 5, that gets to be a lot of power savings. A $190 processor ends up being as cost-effective as a $235 processor after 3 years.

    Kaveri seems to be a good step forward, but it isn't a performance "win" to "dethrone" Intel. It pretty much leaves AMD in the same bucket: they can rule on entry-level systems, gamers on a steep budget, and some HTPC systems depending on the requirements (think gaming on an HTPC with a modest budget, or a very constrained chassis that can't accept a dGPU).

    In general Kaveri doesn't seem to be better for workstations than higher end Intel offerings, and I don't mean Ivy Bridge-E either. Also for a machine where cost is slightly less of an issue, probably still better performance with a higher end i5 or i7 Haswell processor and a dGPU.

    Or on a modest machine, a $130-odd Haswell i3 is probably going to give generally better system performance than a similarly priced Kaveri, and most users aren't going to care that the Kaveri might have 20-50% better iGPU performance. There still isn't a lot of stuff that is GPGPU-capable through OpenCL, and for most of what is, either the user won't notice a hair of difference in performance or it isn't stuff average users of modest machines are going to be using (and higher-end machines would again benefit more from a $100 or $200 dGPU and something like an i5-4670).
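    Two of the figures above reduce to simple arithmetic (the prices and the $15/year saving are the numbers assumed in this comment, not measurements):

```python
# Net per-thread change: +20% IPC combined with a ~10% clock reduction.
net_speedup = 1.20 * 0.90  # ~1.08, i.e. roughly the claimed ~10% gain

# TCO sketch: a $15/year power saving over a 3-year service life
# closes the gap between a $190 and a $235 processor.
power_saving = 15 * 3            # $45 over the service life
effective_cheap = 190            # assumed street price of the cheaper part
effective_pricey = 235 - power_saving
```

    Under these assumed numbers the two parts come out even on total cost after three years.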
  • UtilityMax - Wednesday, January 08, 2014 - link

    Very good analysis. Intel is behind in the iGPU area, but not by much. HD Graphics 4600 and Iris Graphics show that Intel does have good potential for developing a good iGPU. If the Kaveri APU concept does take off, Intel won't take very long to respond. Maybe just a year. And in the end, APU gamers are just a fraction of the PC gamer market, and the PC gamer market is just a fraction of the overall PC market. Basically, the AMD APUs are marketed to folks who care enough about gaming to want an APU instead of an Intel Core, but not enough to be willing to buy a discrete video card. How many are there? To be honest, a $170 APU that alone can power a capable gaming rig is intriguing, but I doubt the leap in performance will be huge compared to the Richland A10.

    Another hurdle that AMD needs to overcome is Intel's aggressive pricing. In the end, the $130 Core i3 is a pretty damn good CPU for most folks who just run productivity and a few multimedia apps. The A10 APU would have been a steal if AMD offered it at the same price as the Core i3. But at $160-170, the "productivity" folks will walk away with the Core i3, while many gamers will scratch their heads about buying this APU vs an Intel Core i3 with a dedicated low-end GPU. Power users will skip both and jump straight to a Core i5 or i7.
  • yankeeDDL - Wednesday, January 08, 2014 - link

    If I may comment here ...
    Intel's GPU offering is comparable to AMD's only in terms of Iris Pro. Everything else is far behind.
    The i7-4850HQ is over $300 more expensive than the 5800K, and I don't quite buy the yearly cost estimates: unless you play 24/7, the delta cost in electricity is going to be much lower.
    There are other savings in AMD's platform at MoBo level, for various reasons, so the saving doesn't only come from the CPU.
    If Mantle really provides the 45% improvement it is going to make the difference even larger.
    Basically, except for the few Iris Pro parts, it really makes no sense to compare Intel with AMD's APUs, from a GPU's perspective, while a 20% penalty on the CPU side seems much more manageable.
    If (and it's a big "if") the HSA really gets some traction at software level, the CPU's shortcomings can be easily offset by the GPU.
    If you think about it, most intensive everyday apps are already leveraging the GPU today (web browsing, spreadsheets, Flash/Silverlight and even image processing).
    So for everyday tasks I doubt anyone can really see any speed advantage comparing a ~$130 AMD APU with a ~$500 Intel APU. Throw in there the gaming advantage from AMD's platform and I see a fairly decent prospect for AMD. Of course, they need to execute: there have to be some decent PC/laptop offerings with good APUs and balanced configurations (no more of those 2GB craps, please).
    The real questions, from where I sit, are: does HSA really work as expected? Is the gap to Haswell's CPU really reduced by ~20%? Is the memory bandwidth going to bottleneck the GPU's performance, or will it really be able to hit the levels of the 7750?
    So I wouldn't say that
  • mikato - Wednesday, January 08, 2014 - link

    It begs the question of why Intel doesn't bring their good integrated graphics to lower-end CPUs at an appropriate price. Maybe they don't see money in it from their perspective; Joe Schmo wouldn't buy a slower CPU for a little more money, and the system builders probably see that as well. AMD has some additional reasons to do this with their long game: they will eventually flip the switch to make their superior GPUs translate into much more powerful overall APUs. They get closer with each architecture update, and with more HSA adoption. It will be interesting to see how things play out. We'll certainly know more come Excavator.
  • UtilityMax - Wednesday, January 08, 2014 - link

    What I said is that Intel has more than enough in-house capability to build a good iGPU. If the APU market really takes off, which is not a given, Intel has the capacity to respond very fast. And Intel is not that far behind right now: the A10-6800K APU is on average about 30% faster than HD Graphics 4600. So the rift is there, but it isn't that big. AMD said previously that the Steamroller APU will improve GPU performance by 20-30%, so if you believe AMD's words, the Kaveri A10s may be faster than Intel HD Graphics by some 50% on average. As for Mantle, it probably will take a long time to take off and become mainstream. Only new titles will use it, but I suspect people who intend to play on APUs have older games in mind as well, and those may not benefit from Mantle. It's possible that Intel will try to respond. It's also possible that the APU market will remain so small that Intel won't bother. Reply
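    For what it's worth, compounding those two hedged figures naively actually lands a bit above 50% (assumed inputs only: +30% for the A10-6800K over HD Graphics 4600, and AMD's claimed 20-30% generational GPU gain):

```python
richland_vs_hd4600 = 1.30                        # assumed ~30% average lead
kaveri_gain_low, kaveri_gain_high = 1.20, 1.30   # AMD's claimed range

lead_low = richland_vs_hd4600 * kaveri_gain_low    # ~1.56x HD 4600
lead_high = richland_vs_hd4600 * kaveri_gain_high  # ~1.69x HD 4600
```

    So under this naive multiplicative model Kaveri would sit roughly 56-69% ahead of HD Graphics 4600, making "some 50%" a slightly conservative reading.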
  • SoBizarre - Wednesday, January 08, 2014 - link

    Can it do 8-tap Jinc in madVR? Reply
  • jdietz - Wednesday, January 08, 2014 - link

    AMD has the right idea allocating most of the die to the GPU. Intel would destroy AMD if they did the same. Any ideas as to why Intel doesn't allocate more of their dies to the GPU? Reply
  • UtilityMax - Wednesday, January 08, 2014 - link

    Probably because most users and hardware vendors don't care. A bulk of enterprise PC users don't care for playing games or running multimedia apps either on desktops or laptops. Likewise, a lot of people don't play games on PCs these days. Those who do play on PCs are split into groups, those who buy a separate video card and those who don't. So IMO, so far the APU user market is still kind of small. AMD is hyping it and hoping that APUs will take off big time. After all, AMD probably has an advantage over Intel in the GPU area. Although, I suspect Intel will be able to respond to AMD if they really have to. Reply
  • abufrejoval - Thursday, January 09, 2014 - link

    You can't do 1080p or better out of DDR3, no matter how much die space you give to the GPU: It's not a GPU limitation but a bandwidth issue. That's the whole problem with the APUs, too.
    Of course, once you put the entire GPU DRAM on die and only stream out video, that will change. I don't know when that will happen, but perhaps it's not that far off.
  • BMNify - Monday, January 13, 2014 - link

    Sooner rather than later, it seems, they (and Intel) will also have to use the lower-latency/lower-power 512-bit Wide I/O 2 option (4 x 128-bit channels of DDR3) as 4K Rec. 2020, and finally the real 8K UHDTV real-colour Rec. 2020 spec, comes around in the 2016-2020 timeline...
  • luism - Wednesday, January 08, 2014 - link

    How's Linux support? Reply
  • BMNify - Monday, January 13, 2014 - link

    "luism :How's Linux support?"
    Pretty crap, apparently, for any radeonsi device to date.

    The SteamOS Linux initiative seems to be pushing AMD to provide something workable, but as usual the perpetual "the next one will be better" is prevalent.

    As a basic comparison, the best current Linux comparison I could find was these 3rd-party aftermarket cards.

    As this makes very clear, obviously the Radeon HD 7000 series and above have some serious bottlenecks in their paths, and extrapolating to the Kaveri SoCs with their single 128-bit bus to the gfx core, it probably won't end well here, but we shall see soon.
  • BMNify - Monday, January 13, 2014 - link

    luism , you might find this Linux GL talk interesting
  • Laststop311 - Friday, January 10, 2014 - link

    Excited to see what kaveri can bring for the HTPC market. Can we play pc games at 1920x1080 in our living room with decent quality settings decent form factor and decent noise profile? Reply
  • BMNify - Monday, January 13, 2014 - link

    No. Reply
  • BMNify - Monday, January 13, 2014 - link

    You can, however, get lots of so-called "Android TV boxes" cheap today; for instance, a quad-core box for £50 that you can control from any Android phone/tablet, rather than the underpowered single-core Google Chromecast:
    "Quad Core Android 4.2 TV Box (MINI PC) "ATV" with 1.8GHz CPU, 2GB RAM, Full HD Output, HDMI DLNA WIFI 8GB HI718"

    Of course, if you are not in a hurry as such, then you are far better off looking for the new octa-core ARM Cortex parts with an integrated UHD-1 real-colour Rec. 2020 decoder as standard, to give you more options later on....

    ...The odd thing about AMD right now, ever since they announced working with ARM IP, is that they could actually bypass this limiting single-channel 128-bit interconnect to the gfx core and simply use the existing older ARM CoreLink CCN-504 Cache Coherent Network IP, delivering up to one terabit (128 GigaBytes/s) of usable system bandwidth per second, in their latest APUs alongside their existing ARM IP licence.

    They'd get far better cache coherence, with massively more data throughput potential than today's APUs, for almost free (a few pennies for the extra IP licence), but they probably won't. Never mind using the even better current CoreLink CCN-508, which can deliver up to 1.6 terabits of sustained usable system bandwidth per second, with a peak bandwidth of 2 terabits per second (256 GigaBytes/s), at processor speeds scaling all the way up to 32 processor cores total..... plus some super-low-power, fast Wide I/O 2 RAM as icing on the cake for 2014....
  • saneblane - Tuesday, January 14, 2014 - link

    Where is the official launch review for Kaveri? Normally you guys do Nvidia and Intel reviews at midnight; I guess AMD can't get the same kind of love around here. Reply
  • lisavidal - Sunday, January 26, 2014 - link

    It's really cool! Reply
