This article appears in the AMD Portal on AnandTech, a section sponsored by AMD that collects AnandTech's independent AMD coverage alongside news from AMD directly.

Back in January, when AMD launched their first Kaveri APUs, we tested the A10-7850K and the A8-7600. The former sits at the top of the product stack with two Steamroller modules, 512 GCN cores, and a 4.0 GHz turbo frequency; it shares a nominal 95W TDP with the A10-7700K, which was released at the same time. The interesting element in the mix was the 65W A8-7600, which AMD provided as a review sample at the time but slated for release ‘at a future date’. Today is that date, six months after the initial reviews.

AMD’s reason for the delay revolves around the 65W nature of these APUs and their configurable TDP. Rather than launch a new APU every two months, AMD combined all three into a single launch to push one message: each of the 65W APUs can be adjusted to fit within a 45W TDP by reducing clock speeds.

When we examined the A8-7600 at 45W, we found that the killer application for this APU would be in the integrated graphics segment, where it offered some of the best processor graphics per watt on the market. The other two APUs being released today, the A10-7800 and the A6-7400K, both aim to continue that trend above and below the A8 market.

I am currently waiting for the full specifications for these APUs from AMD, including memory support as well as core counts/frequencies of the processor graphics.

Users will note that AMD has reduced the listed price of the A8-7600 since the initial review in order to better align the stack on price against performance. All three new APUs will register as 65W parts to begin with; the user has to enable 45W mode in the motherboard BIOS. Enabling 45W mode corresponds to a ~400-500 MHz drop in full-load frequency while still allowing a high turbo.

AMD quotes a 6-7% drop in performance in PCMark 8 and 3DMark by moving down to the 45W TDP mode, with SFF or low power systems seeing the most benefit.
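Taking AMD's quoted 6-7% drop at face value, the efficiency trade is easy to quantify. The sketch below uses the nominal TDP figures as a rough proxy for power draw (an assumption on our part; TDP is not measured consumption), and suggests performance per watt improves by roughly a third in 45W mode:

```python
# Rough performance-per-watt estimate for the 45W cTDP mode.
# Assumes AMD's quoted 6-7% PCMark/3DMark drop and treats TDP as a
# proxy for power draw, which it is not exactly.

tdp_65, tdp_45 = 65.0, 45.0
for perf_drop in (0.06, 0.07):
    perf_45 = 1.0 - perf_drop                 # relative to 65W performance
    efficiency_gain = (perf_45 / tdp_45) / (1.0 / tdp_65) - 1.0
    print(f"{perf_drop:.0%} drop -> {efficiency_gain:.1%} better perf/W")
    # 6% drop -> 35.8% better perf/W; 7% drop -> 34.3% better perf/W
```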

The technologies that were part of the first Kaveri APU launch are also present with the 65W models, including the Heterogeneous System Architecture (HSA), Unified Memory for both CPU and GPU (hUMA), heterogeneous queuing of kernels (hQ), Graphics Core Next (GCN) with Mantle and AMD TrueAudio.
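The practical difference hUMA makes can be illustrated with a toy analogy (plain Python, purely conceptual, not AMD's actual API; real HSA code would use something like OpenCL 2.0 shared virtual memory or the HSA runtime): in the traditional discrete-GPU model the host copies a buffer to device memory and copies results back, while under a unified address space both sides operate on the same allocation, so a pointer handed across is simply valid.

```python
# Toy model contrasting copy-based offload with a unified address space.
# Conceptual sketch only: Python lists stand in for memory allocations.

def offload_with_copies(host_buf, kernel):
    """Discrete-GPU style: copy in, compute on the device copy, copy out."""
    device_buf = list(host_buf)            # host -> device copy
    result = [kernel(x) for x in device_buf]
    return list(result)                    # device -> host copy

def offload_unified(shared_buf, kernel):
    """hUMA style: CPU and GPU dereference the same allocation in place."""
    for i, x in enumerate(shared_buf):     # "GPU" writes through the shared buffer
        shared_buf[i] = kernel(x)

data = [1, 2, 3, 4]
copied = offload_with_copies(data, lambda x: x * 2)
print(copied)   # [2, 4, 6, 8]; 'data' itself is untouched
offload_unified(data, lambda x: x * 2)
print(data)     # [2, 4, 6, 8]; updated in place, no copies made
```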

With the OpenCL support, AMD is keen to highlight performance benefits in Adobe Photoshop CC (A10 vs. i5), LibreOffice (A8 vs. i3) and JPEG decode (A6 vs. Pentium). AMD also points out in its release that PowerDVD 14 fully supports HEVC via OpenCL compute on AMD APUs, with AMD Fluid Motion Video to follow in a later update.

AMD is running a promotion on the A10 series of Kaveri APUs from August through October: purchase an A10 APU and choose a full copy of Thief, Sniper Elite III, or Murdered: Soul Suspect. The offer will be available in North America, Latin America, EMEA and Asia Pacific/Japan.

We have the A10-7800 APU in for testing, so be sure to look out for that review. We have also asked for an A6-7400K sample, which allows overclocking, and it will be interesting to see how the single-module SKU stacks up against the Pentium CPUs we recently tested.

7 Comments

  • britjh22 - Thursday, July 31, 2014 - link

    Any word on actual vendor availability? Particularly for the A8-7600, which was the most interesting component for myself and many others at the initial Kaveri launch (even more so now that the list price is reduced). This is the chip I'd probably use for all builds for family & friends due to its great balance.
  • konroh77 - Thursday, July 31, 2014 - link

    Agreed! I have done several builds that would have been nice with the A8-7600, and I either had to go with an older APU, or an i3 depending on the purpose of the machine. Would be nice if I could actually buy one of these chips (6 months+ after it was announced)!
  • Stuka87 - Thursday, July 31, 2014 - link

    Little misleading how they lump CPU and GPU cores together like that. But outside of that, pretty excited for these.
  • Death666Angel - Thursday, July 31, 2014 - link

    Totally misleading how they tell you exactly what is what in their core count.
  • mickulty - Thursday, July 31, 2014 - link

    They do refer to them as *compute* cores, and make the breakdown clear. Also they break down the GPU fairly, rather than quoting the number of shaders (512 on the 7850k). How else could they be expected to emphasise the parallel computing capabilities of their APUs when HSA is used properly?
  • name99 - Thursday, July 31, 2014 - link

    I'm more interested in how close these come to the real promise of HSA. Are we there, or are we still on the way there? In particular:
    - shared address space? If I pass a pointer from the CPU to the GPU will it just work?
    - transparent coherence between CPU and GPU?
    - interrupt support on the GPU? (So I can time slice the GPU like a normal OS/CPU combo)
  • name99 - Thursday, July 31, 2014 - link

    OK, after some further reading around I see that the answer to the first two questions is yes --- memory IS finally done right here. I assume there is some protocol under the covers that "reflects" GPU MMU faults (and perhaps TLB misses?) to the CPU to handle the problem, rather than having the GPU deal with that, but that's an implementation detail.

    I still don't know what the interrupt support on the GPU side is. The interrupt side is important because once it is ALSO in place, we get to a very interesting situation where the developer+OS can basically treat GPUs as just like CPUs, only running a different ISA. The developer can spawn threads, or enqueue tasks, that are targeted at the GPU ISA rather than the x86 ISA and everything will just work. (We have this already.)
    AND the OS will be able to schedule long-running tasks between GPUs, time-sharing them as necessary. (This is the part that I'm not sure is in place yet. Obviously context-switching a GPU is a more heavy-weight operation than context-switching a CPU, but you want it to be possible; otherwise you have to enforce artificial restrictions on the code that runs on the GPU to ensure it doesn't run too long --- and the whole point is to get rid of these restrictions.)
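The scheduling concern raised above can be sketched in a few lines (plain Python, purely illustrative, nothing to do with any real GPU runtime): with only cooperative scheduling, tasks run until they voluntarily yield at a switch point, which is exactly why non-preemptible GPUs must restrict kernels to short runs; a kernel that never yields would monopolize the device.

```python
# Toy cooperative scheduler: "kernels" are generators that yield at
# points where they permit a context switch. A kernel that never yields
# would monopolize the device -- the problem GPU preemption solves.
from collections import deque

def run_cooperative(kernels, max_steps=100):
    """Round-robin over generator 'kernels'; each yield is a switch point."""
    queue = deque(enumerate(kernels))
    order = []                         # which kernel ran at each step
    steps = 0
    while queue and steps < max_steps:
        kid, k = queue.popleft()
        try:
            next(k)                    # run until the kernel's next yield
            order.append(kid)
            queue.append((kid, k))     # it cooperated, so requeue it
        except StopIteration:
            pass                       # kernel finished, drop it
        steps += 1
    return order

def well_behaved(n):
    for _ in range(n):
        yield                          # frequent switch points

print(run_cooperative([well_behaved(2), well_behaved(2)]))
# the two kernels interleave: [0, 1, 0, 1]
```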
