The GPU

AMD's move from VLIW4 to the newer GCN architecture makes a lot of sense. Rather than being behind the curve, Kaveri now shares the same GPU architecture as the Hawaii based GCN parts; specifically the GCN 1.1 based R9 290X and R7 260X from the discrete GPU lineup. By synchronizing the architecture of their APUs and discrete GPUs, AMD is finally in a position where any performance gains or optimizations made for their discrete GPUs will feed back into their APUs as well. We have already discussed TrueAudio and the UVD/VCE enhancements, and the other major feature to come to the fore is Mantle.

The difference between Kaveri's implementation of GCN and Hawaii's, aside from sharing silicon with the CPU, is the addition of coherent shared unified memory, as Rahul discussed on the previous page.

AMD makes some rather interesting claims when it comes to gaming GPU performance – as shown in the slide above, ‘approximately 1/3 of all Steam gamers use slower graphics than the A10-7850K’. Given that this SKU has 512 SPs, it makes me wonder just how many gamers are actually using laptop or netbook/notebook graphics. A quick look at the Steam survey shows the top choices for graphics are mainly integrated solutions from Intel, followed by midrange discrete cards from NVIDIA. There are a fair number of integrated graphics solutions, coming from either CPUs with integrated graphics or gaming laptops, e.g. the ‘Mobility Radeon HD 4200’. With the Kaveri APU, AMD is clearly trying to leapfrog all of those, and with the unification of architectures, the updates from here on out will benefit both sides of the equation.

A small bit more about the GPU architecture:

Ryan covered the Hawaii implementation of GCN in his R9 290X review, including the IEEE 754-2008 compliance, texture fetch units, registers and precision improvements, so I will not dwell on them here. The GCN 1.1 implementations on discrete graphics cards will still rule the roost in terms of sheer compute power – the TDP scaling of APUs will never reach the lofty heights of full blown discrete graphics unless there is a significant shift in the way these APUs are developed, meaning that features such as HSA, hUMA and hQ still have a way to go before they become the dominant force. The low copying overhead on the APU, however, should be a big benefit for GPU computing, especially gaming and texture manipulation workloads that require CPU callbacks.
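To make the ‘low copying overhead’ point a little more concrete, below is a minimal sketch (my own illustration, not AMD's code, with error checking omitted) of how a program might use OpenCL 2.0 coarse-grained shared virtual memory on an HSA-style APU. One allocation is visible to both the CPU and the GPU, so the usual clEnqueueWriteBuffer/clEnqueueReadBuffer staging copies disappear. It assumes an OpenCL 2.0 capable driver and that the GPU is the first device enumerated.

```cpp
// Minimal sketch of the zero-copy idea, using OpenCL 2.0 coarse-grained SVM.
// Error checking is omitted for brevity; clSVMAlloc simply returns NULL on a
// device/driver without SVM support.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, device, nullptr, nullptr);

    // Trivial kernel: brighten a 'texture' in place.
    const char* src =
        "__kernel void brighten(__global uchar* px) {"
        "  size_t i = get_global_id(0);"
        "  px[i] = (uchar)min((uint)px[i] + 32u, 255u);"
        "}";
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, "-cl-std=CL2.0", nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "brighten", nullptr);

    // One allocation visible to both CPU and GPU: no explicit staging copies.
    const size_t N = 1920 * 1080;
    unsigned char* img = (unsigned char*)clSVMAlloc(ctx, CL_MEM_READ_WRITE, N, 0);

    // CPU writes directly into the shared buffer (map is a sync point, not a transfer).
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, img, N, 0, nullptr, nullptr);
    for (size_t i = 0; i < N; ++i) img[i] = (unsigned char)(i & 0xFF);
    clEnqueueSVMUnmap(q, img, 0, nullptr, nullptr);

    // GPU works on the very same memory.
    clSetKernelArgSVMPointer(k, 0, img);
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &N, nullptr, 0, nullptr, nullptr);

    // The CPU-side 'callback' reads the result directly, again without a copy.
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, img, N, 0, nullptr, nullptr);
    printf("first pixel after GPU pass: %d\n", (int)img[0]);
    clEnqueueSVMUnmap(q, img, 0, nullptr, nullptr);
    clFinish(q);

    clSVMFree(ctx, img);
    return 0;
}
```

On a discrete card the same pattern would normally involve staging data across PCIe; on an APU with shared memory the map/unmap calls are, in principle, synchronization points rather than transfers, which is exactly the overhead reduction being claimed.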

An added benefit for gamers is that each of the GCN 1.1 compute units is asynchronous and capable of independently scheduling different work. Essentially the high end A10-7850K SKU, with its eight compute units, acts as eight mini-GPU blocks for work to be carried out on.
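As a quick aside, that compute unit count is something you can query directly from the runtime. The tiny sketch below (again my own, assuming an OpenCL driver is installed and the GPU is the first device enumerated) simply asks for CL_DEVICE_MAX_COMPUTE_UNITS, which on the A10-7850K's R7 graphics should come back as 8.

```cpp
// Query how many compute units the GPU device exposes to the runtime.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platform; cl_device_id device; cl_uint cus = 0;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cus), &cus, nullptr);
    printf("GPU compute units: %u\n", cus);  // expect 8 on the A10-7850K
    return 0;
}
```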

Despite AMD's improvements to their GPU compute frontend, they are still ultimately bound by the limited amount of memory bandwidth offered by dual-channel DDR3. Consequently there is still scope to increase performance by increasing memory bandwidth – I would not be surprised if AMD started looking at some sort of intermediary L3 or eDRAM to increase the capabilities here.
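As a rough back-of-the-envelope comparison (my numbers, not AMD's): dual-channel DDR3-2133 tops out at 2 channels × 64 bits × 2133 MT/s ÷ 8 ≈ 34 GB/s, while the R9 290X's 512-bit GDDR5 interface at 5 Gbps offers 320 GB/s – nearly an order of magnitude more to feed the discrete card's 44 compute units. That gap is why the integrated GPU is so sensitive to memory speed, and why some form of on-package cache looks like an obvious avenue.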

Details on Mantle are Few and Far Between

AMD’s big thing with GCN is meant to be Mantle – AMD's low level API for game engine designers intended to improve GPU performance and reduce the at-times heavy CPU overhead in submitting GPU draw calls. We're effectively talking about scenarios bound by single threaded performance, an area where AMD can definitely use the help. Although I fully expect AMD to eventually address its single threaded performance deficit vs. Intel, Mantle adoption could help Kaveri tremendously. The downside obviously being that Mantle's adoption at this point is limited at best.

Despite Mantle's public debut being held back by the delay of the Mantle patch for Battlefield 4 (Frostbite 3 engine), AMD was happy to claim a 2x boost in an API call limited benchmark scenario and 45% better frame rates with pre-release builds of Battlefield 4. We were told this number may rise by the time the patch reaches a public release.

Unfortunately we still don't have any further details on when Mantle will be deployed for end users, or what effect it will have. Since Battlefield 4 is intended to be the launch vehicle for Mantle - being by far the highest profile game of the initial titles that will support it - AMD is essentially in a holding pattern waiting on EA/DICE to hammer out Battlefield 4's issues and then get the Mantle patch out. AMD's best estimate is currently this month, but that's something that clearly can't be set in stone. Hopefully we'll be taking an in-depth look at real-world Mantle performance on Kaveri and other GCN based products in the near future.

Dual Graphics

AMD has been coy regarding Dual Graphics, especially when frame pacing gets thrown into the mix. I am struggling to recall whether dual graphics, the pairing of the APU with a small discrete GPU for better performance, made an appearance at any point during their media presentations. During the UK presentations, I specifically asked about this and got little response beyond ‘AMD is working to provide these solutions’. I pointed out that it would be beneficial if AMD gave an explicit list of supported graphics pairings to help users when building systems, which is what I would like to see anyway.

AMD did address the concept of Dual Graphics in their press deck. In their limited testing scenario, they paired the A10-7850K (which has R7 graphics) with the R7 240 2GB GDDR3. In fact their suggestion is that any R7 based APU can be paired with any G/DDR3 based R7 GPU. Another disclaimer is that AMD recommends testing dual graphics solutions with their 13.350 driver build, which is due out in February, whereas for today's review we were sent their 13.300 beta 14 and RC2 builds (which at this time have yet to be assigned an official Catalyst version number).

The following image shows the results as presented in AMD’s slide deck. We have not verified these results in any way; they are only here as a reference from AMD.

It's worth noting that while AMD's performance with dual graphics thus far has been inconsistent, we do have some hope that it will improve with Kaveri if AMD is serious about continuing to support it. With Trinity/Richland AMD's iGPU was in an odd place, being based on an architecture (VLIW4) that wasn't used in the cards it was paired with (VLIW5). Never mind the fact that both were a generation behind GCN, where the bulk of AMD's focus was. But with Kaveri and AMD's discrete GPUs now both based on GCN, and with AMD having significantly improved their frame pacing situation in the last year, dual graphics is in a better place as an entry level solution for improving gaming performance. Though like Crossfire at the high end, there are inevitably going to be limits to what AMD can do with a multi-GPU setup versus a single, more powerful GPU.

AMD Fluid Motion Video

Another aspect that AMD did not expand on much is their Fluid Motion Video technology on the A10-7850K. This essentially uses frame interpolation (from 24 Hz up to 50 Hz / 60 Hz) to ensure a smoother experience when watching video. AMD’s explanation of the feature, especially in terms of presenting the concept to our reader base, is minimal at best, amounting to a single slide in the press deck.
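To give the concept a bit more substance than AMD's slide does, here is a deliberately simplified sketch of what frame-rate conversion involves. This is my own illustration, not AMD's algorithm – Fluid Motion Video performs motion-compensated interpolation on the GPU, whereas the code below just blends the two nearest 24 fps source frames to produce each 60 Hz output frame.

```cpp
// Hypothetical sketch of 24 fps -> 60 Hz frame-rate conversion by blending.
// Real implementations (including AMD's) use motion estimation, not blending.
#include <cstdio>
#include <vector>

using Frame = std::vector<float>;  // stand-in for a decoded picture

// Linear blend: out = (1 - w) * a + w * b
Frame blend(const Frame& a, const Frame& b, float w) {
    Frame out(a.size());
    for (size_t i = 0; i < a.size(); ++i) out[i] = (1.0f - w) * a[i] + w * b[i];
    return out;
}

std::vector<Frame> convert24to60(const std::vector<Frame>& src) {
    std::vector<Frame> dst;
    const double step = 24.0 / 60.0;               // source frames advanced per output frame
    const size_t outFrames = (size_t)(src.size() / step);
    for (size_t n = 0; n < outFrames; ++n) {
        double t = n * step;                       // position on the 24 fps timeline
        size_t i = (size_t)t;
        float w = (float)(t - i);                  // how far we are towards frame i+1
        if (i + 1 < src.size())
            dst.push_back(blend(src[i], src[i + 1], w));
        else
            dst.push_back(src.back());             // hold the final frame
    }
    return dst;
}

int main() {
    // 24 one-pixel 'frames' with linearly increasing brightness: one second of video.
    std::vector<Frame> src(24, Frame(1));
    for (size_t i = 0; i < src.size(); ++i) src[i][0] = (float)i;

    auto out = convert24to60(src);
    printf("%zu source frames -> %zu output frames\n", src.size(), out.size());
    printf("output frame 1 sits 40%% of the way between sources 0 and 1: %.2f\n", out[1][0]);
    return 0;
}
```

The reason real implementations estimate motion rather than blend is to avoid the judder of a plain 3:2 cadence without introducing the ghosting that simple blending produces.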

Comments (380)

  • extremesheep49 - Friday, February 21, 2014 - link

    I don't know if anyone will even see this now but...

    "The reality is quite clear by now: AMD isn't going to solve its CPU performance issues with anything from the Bulldozer family. What we need is a replacement architecture, one that I suspect we'll get after Excavator concludes the line in 2015."

    I don't know that this conclusion is very fair, considering that statement, if you compare it to a previous article linked below. The linked article recommends a (currently) $100 100W A8-5600K. The Kaveri equivalent is a $120 45W CPU of approximately the same performance.

    Doesn't the linked article's recommendations contradict your Kaveri conclusion at least for some cases? Kaveri's CPU performance probably is sufficient for many discrete GPU setups.

    http://anandtech.com/show/6934/choosing-a-gaming-c...

    Quote from link:
    "Recommendations for the Games Tested at 1440p/Max Settings
    A CPU for Single GPU Gaming: A8-5600K + Core Parking updates"
  • Novaguy - Sunday, February 23, 2014 - link

    Gaming performance is usually (but not always) GPU bottlenecked, not CPU bottlenecked.

    The reason why a trinity was getting recommended in a lot of gaming boxes was that in dollar limited scenarios, you'll often get better gaming performance mating a $120 quad core trinity with a $300 gpu, versus a $220 i5 with a $200 gpu.

    For even better results, mate an $80 Athlon II X4 750K if you're going discrete gpu, but I don't think the gpu-less trinity chip was available then.
  • PG - Monday, February 24, 2014 - link

    I wanted to compare Kaveri to some other CPUs not in this review. Bench would be perfect for that, but the Kaveri CPUs are not listed there. Why? Can they be added?
  • Cptn_Slo - Tuesday, April 1, 2014 - link

    Well at least this shows that AMD is able to increase performance significantly given the appropriate die shrink. I'm a big Intel fan but a healthy company/market needs competition, and looks like AMD is able to offer that in at least some areas.
  • zobisch - Wednesday, April 2, 2014 - link

    I have an H60 cooler on my 7850K with 2400 MHz RAM, OC'd to 4.4 GHz, and I love it... I think the corner for APUs will really turn when DDR4 boards come out. I also would like to see an 8-core part with a 24 compute unit GPU as well, but that's probably a die shrink or more away.
  • vickfan104 - Tuesday, May 6, 2014 - link

    An Xbox One/PS4-like APU is what I'm still looking for from AMD. To me, that seems like the point where an APU becomes truly compelling as opposed to a CPU + discrete GPU.
  • P39Airacobra - Thursday, January 1, 2015 - link

    I still can't understand why anyone would be insane enough to pay the outrageously high price for an AMD APU simply because it has a built-in GPU that can play some games! When for a few dollars more you can get a high end i5 CPU and a midrange GPU! Or for the exact same price you can get an AMD quad and a midrange GPU. Either choice would blow any AMD APU out of the water! Yes you can Crossfire the APU, but you can also Crossfire and SLI regular GPUs. Besides, by the time you've paid the money for an AMD APU and a GPU to Crossfire with it, you could have got a nice i3 or FX 6300 or even a last gen Ivy Bridge i5 with a GPU like an R9 270 or a GTX 660. And either one of those would blow away an APU/Crossfire setup! What are you people thinking? I swear people today would not only buy the Brooklyn Bridge once but more than once!
  • P39Airacobra - Thursday, January 1, 2015 - link

    The most logical thing to do is buy an FX-6300 for $119 and a motherboard for $60, then buy a GTX 660 or an R9 270 and a 1600x900 resolution monitor, and you will be able to max out anything.
  • P39Airacobra - Thursday, January 1, 2015 - link

    Besides 60fps on Medium/High at only 1280x1024 is a laugh! A GT 740 and a R7 250 can do better than that!
  • kzac - Monday, February 16, 2015 - link

    After living with the processor on a Gigabyte main board for the past several months, I can honestly say it's bested both the Core i3 and i5 systems (some quad core) I have used in the past. What it might not score on benchmarks for all-out throughput it makes up for in its multitasking capability. What normally crashes my i3 and makes my i5 struggle while multitasking (many things open and operating at the same time) doesn't tend to affect the A10 APU. The Core i3 I am using is the later 3220 chip, which completely chokes with anything above average multitasking under W7 Pro, even though it has 12 GB of 1600 RAM. The Core i5 was better at multitasking than the Core i3 but still not nearly as effective as the AMD A10 7850. While I cannot speak to the performance of the AMD A10 for gaming, for multitasking it is very effective.
    For gaming I have used the FX series AMD processors, both Quad Core and 8 core.
