The GPU

AMD's move from VLIW4 to the newer GCN architecture makes a lot of sense. Rather than being behind the curve, Kaveri now shares the same GPU architecture as AMD's Hawaii-based GCN parts; specifically, it uses the same GCN 1.1 design as the R9 290X and R7 260X from the discrete GPU lineup. By synchronizing the architecture of their APUs and discrete GPUs, AMD is finally in a position where any performance gains or optimizations made for their discrete GPUs will feed back into their APUs, meaning Kaveri benefits as well. We have already discussed TrueAudio and the UVD/VCE enhancements, and the other major feature to come to the fore is Mantle.

The difference between Kaveri's implementation of GCN and Hawaii's, aside from the integration with the CPU on the same piece of silicon, is the addition of coherent shared unified memory, as Rahul discussed on the previous page.

AMD makes some rather interesting claims when it comes to gaming GPU performance – as shown in the slide above, ‘approximately 1/3 of all Steam gamers use slower graphics than the A10-7850K’. Given that this SKU has 512 SPs, it makes me wonder just how many gamers are actually using laptops or netbook/notebook graphics. A quick look at the Steam survey shows the top choices for graphics are mainly integrated solutions from Intel, followed by midrange discrete cards from NVIDIA. There are a fair number of integrated graphics solutions, coming from either CPUs with integrated graphics or laptop gaming, e.g. ‘Mobility Radeon HD 4200’. With the Kaveri APU, AMD is clearly trying to jump over all of those, and with the unification of architectures, the updates from here on out will benefit both sides of the equation.

A small bit more about the GPU architecture:

Ryan covered the GCN Hawaii segment of the architecture in his R9 290X review, including the IEEE-754 2008 compliance, texture fetch units, registers, and precision improvements, so I will not dwell on them here. The GCN 1.1 implementations on discrete graphics cards will still rule the roost in terms of sheer compute power – the TDP scaling of APUs will never reach the lofty heights of full-blown discrete graphics unless there is a significant shift in the way these APUs are developed, meaning that features such as HSA, hUMA and hQ still have a way to go before they become the dominant force. The low copy overhead on the APU should nonetheless be a big break for GPU computing, especially gaming and texture manipulation that requires CPU callbacks.

A further benefit for gamers is that each of the GCN 1.1 compute units is asynchronous and can schedule different work independently. Essentially the high-end A10-7850K SKU, with its eight compute units, acts as eight mini-GPU blocks for work to be carried out on.
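The advantage of independent scheduling can be sketched with a toy model (my own simplification for illustration, not AMD's actual dispatch logic): compare eight units that must march through work in lockstep waves, waiting for the slowest task in each wave, against eight units that each pull the next task as soon as they finish.

```python
import heapq

def lockstep_makespan(tasks, units=8):
    # All units start each wave of `units` tasks together and must
    # wait for the slowest task before the next wave begins.
    total = 0.0
    for i in range(0, len(tasks), units):
        total += max(tasks[i:i + units])
    return total

def async_makespan(tasks, units=8):
    # Each unit independently grabs the next task the moment it is free.
    finish = [0.0] * units
    heapq.heapify(finish)
    for t in tasks:
        earliest = heapq.heappop(finish)
        heapq.heappush(finish, earliest + t)
    return max(finish)

# Two waves, each with one long task (4 time units) and seven short ones.
work = [4, 1, 1, 1, 1, 1, 1, 1] * 2
print(lockstep_makespan(work))  # 8.0
print(async_makespan(work))     # 5.0
```

In the lockstep model the two long tasks serialize the whole device; with independent per-unit scheduling the short tasks fill in around them, which is the kind of utilization win asynchronous compute units are after.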

Despite AMD's improvements to their GPU compute frontend, they are still ultimately bound by the limited amount of memory bandwidth offered by dual-channel DDR3. Consequently there is still scope to increase performance by increasing memory bandwidth – I would not be surprised if AMD started looking at some sort of intermediary L3 or eDRAM to increase the capabilities here.
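To put numbers on that bottleneck, peak theoretical bandwidth for a DDR interface is simply transfer rate × bus width × channel count. A small hypothetical helper (names are my own) makes the gap to discrete cards obvious:

```python
def peak_bandwidth_gbs(mt_per_s, channels=2, bus_width_bits=64):
    # transfers/s * bytes per transfer per channel * number of channels
    return mt_per_s * 1e6 * channels * (bus_width_bits / 8) / 1e9

# Dual-channel DDR3-2133, the fastest memory Kaveri officially supports:
print(peak_bandwidth_gbs(2133))                                  # ~34.1 GB/s
# The R9 290X's 512-bit GDDR5 at 5 GT/s, for comparison:
print(peak_bandwidth_gbs(5000, channels=1, bus_width_bits=512))  # 320.0 GB/s
```

With roughly an order of magnitude less bandwidth to feed 512 SPs, it is easy to see why an intermediary L3 or eDRAM buffer would be attractive.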

Details on Mantle are Few and Far Between

AMD’s big thing with GCN is meant to be Mantle – AMD's low level API for game engine designers intended to improve GPU performance and reduce the at-times heavy CPU overhead in submitting GPU draw calls. We're effectively talking about scenarios bound by single threaded performance, an area where AMD can definitely use the help. Although I fully expect AMD to eventually address its single threaded performance deficit vs. Intel, Mantle adoption could help Kaveri tremendously. The downside obviously being that Mantle's adoption at this point is limited at best.
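The draw call problem is easy to illustrate with back-of-the-envelope numbers (the per-call cost below is an assumption for illustration, not a measured figure for any real API or driver):

```python
def max_draw_calls_per_frame(fps, overhead_us_per_call):
    # CPU time available per frame, divided by the CPU cost of
    # submitting a single draw call through the API/driver stack.
    frame_budget_us = 1_000_000 / fps
    return int(frame_budget_us // overhead_us_per_call)

# Hypothetical figures: at 60 fps, a 50 us/call submission cost caps a
# single submission thread at a few hundred draw calls per frame...
print(max_draw_calls_per_frame(60, 50))  # 333
# ...and halving the per-call overhead doubles the budget, which is the
# kind of scaling a thinner API layer is chasing.
print(max_draw_calls_per_frame(60, 25))  # 666
```

This is why a low-level API matters most when the game is CPU-bound on submission rather than GPU-bound on shading.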

Although Mantle's debut has been held back by the delayed Mantle patch for Battlefield 4 (Frostbite 3 engine), AMD was happy to claim a 2x boost in an API-call-limited benchmark scenario and 45% better frame rates with pre-release builds of Battlefield 4. We were told these numbers may rise by the time the patch reaches a public release.

Unfortunately we still don't have any further details on when Mantle will be deployed for end users, or what effect it will have. Since Battlefield 4 is intended to be the launch vehicle for Mantle - being by far the highest profile game of the initial titles that will support it - AMD is essentially in a holding pattern waiting on EA/DICE to hammer out Battlefield 4's issues and then get the Mantle patch out. AMD's best estimate is currently this month, but that's something that clearly can't be set in stone. Hopefully we'll be taking an in-depth look at real-world Mantle performance on Kaveri and other GCN based products in the near future.

Dual Graphics

AMD has been coy regarding Dual Graphics, especially when frame pacing is thrown into the mix. I am struggling to recall whether Dual Graphics, the pairing of the APU with a small discrete GPU for better performance, made an appearance at any point during their media presentations. During the UK presentations, I specifically asked about this and got little response beyond ‘AMD is working to provide these solutions’. I pointed out that an explicit list of validated graphics pairings would help users when building systems, and that is what I would like to see.

AMD did address the concept of Dual Graphics in their press deck. In their limited testing scenario, they paired the A10-7850K (which has R7 graphics) with the R7 240 2GB GDDR3. Their suggestion is that any R7-based APU can be paired with any G/DDR3-based R7 GPU. Another disclaimer is that AMD recommends testing Dual Graphics solutions with their 13.350 driver build, which is due out in February, whereas for today's review we were sent their 13.300 beta 14 and RC2 builds (which at this time have yet to be assigned an official Catalyst version number).

The following image shows the results as presented in AMD’s slide deck. We have not verified these results in any way, and they are included here only as a reference from AMD.

It's worth noting that while AMD's performance with dual graphics thus far has been inconsistent, we do have some hope that it will improve with Kaveri if AMD is serious about continuing to support it. With Trinity/Richland AMD's iGPU was in an odd place, being based on an architecture (VLIW4) that wasn't used in the cards it was paired with (VLIW5). Never mind the fact that both were a generation behind GCN, where the bulk of AMD's focus was. But with Kaveri and AMD's discrete GPUs now both based on GCN, and with AMD having significantly improved their frame pacing situation in the last year, dual graphics is in a better place as an entry-level solution for improving gaming performance. Though like Crossfire on the high end, there are inevitably going to be limits to what AMD can do in a multi-GPU setup versus a single, more powerful GPU.

AMD Fluid Motion Video

Another aspect that AMD did not expand on much is their Fluid Motion Video technology on the A10-7850K. This essentially uses frame interpolation (from 24 Hz up to 50 Hz / 60 Hz) to provide a smoother experience when watching video. AMD’s explanation of the feature, especially in terms of presenting the concept to our reader base, was minimal at best, amounting to a single slide.
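AMD has not published how Fluid Motion Video works internally, but the underlying cadence problem is straightforward: 24 source frames must be spread over 60 display refreshes, so most output frames fall between two source frames and have to be synthesized. A hypothetical sketch of the timing side only (blend positions, no motion estimation):

```python
def cadence(num_refreshes, src_hz=24, dst_hz=60):
    # For each display refresh, report which pair of source frames it
    # falls between and how far along (0.0 = earlier frame, 1.0 = later).
    out = []
    for k in range(num_refreshes):
        t = k * src_hz / dst_hz          # position on the source timeline
        base = int(t)
        out.append((base, base + 1, round(t - base, 2)))
    return out

for entry in cadence(5):
    print(entry)
# (0, 1, 0.0) -> lands exactly on source frame 0
# (0, 1, 0.4), (0, 1, 0.8), (1, 2, 0.2), (1, 2, 0.6) -> must be synthesized
```

Without interpolation those intermediate positions collapse to the nearest source frame (the familiar 3:2 pulldown judder); frame interpolation generates new frames at those positions instead.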

380 Comments

  • SofS - Wednesday, January 22, 2014 - link

    Following your links and looking around I found:
    http://www.tomshardware.com/reviews/core-memory-sc...

    It links to previous similar articles concerning the Phenom II and the i7 of the time (975). Seems that indeed the C2Q does not benefit much from memory improvements compared to the other two, but there is a difference. This and all three of those cases are relevant since all three models were very popular. Also, I remember choosing the at-the-time smaller modules for my first kit with this particular system since they were the only reasonable DDR3 modules at 1600 within reach, albeit I never managed to stabilize it at CL6. On the other hand the latter kit I upgraded with got CL6 from XMP from the beginning while being larger. Given that memory is very cheap compared to the whole system plus the cost of repurchasing non-portable software, this (maybe also a new GPU) might just be the final push needed for many to wait for the next generation of native DDR4 systems.
  • fokka - Tuesday, January 14, 2014 - link

    i understand your sentiment, but then again, about every modern mainstream cpu should destroy a c2d and even quad in raw performance. and you even get relatively capable integrated graphics included in the package, so about everyone even moderately interested in computing performance and efficiency "should bite the bullet" if he's got a couple hundred bucks on the side.
  • just4U - Wednesday, January 15, 2014 - link

    and that's the problem.. they're not. "It's good enough" Numbers are.. just that, numbers. We hit a wall in 2008 (or thereabouts..) and while performance kept increasing it's been in smaller increments. Over the span of several generations that really can add up, but not the way it once did.

    It used to be you'd get on an old system and it would be like pulling teeth because the differences were very noticeable, and in some cases they still are.. but for the most part? Not so much.. not for normal/casual usage. There is a ceiling.. Athlon X2s, P4s? No.. you'll notice it.. Quad 8x Core2? hmmm.. How about a socket 1366 CPU or the 1156 stuff? Or the PIIs from AMD. Those people should upgrade? Certainly if their board dies and they can't replace it.. but otherwise not so much.
  • just4U - Wednesday, January 15, 2014 - link

    That should have read Quad 8x series Core2s.. anyway, these days it seems like we do a lot more change out the video card, add in an SSD, increase RAM, rather than build systems from the ground up, as systems can stick around longer and still be quite viable. Yes/no?
  • tcube - Thursday, January 16, 2014 - link

    Totally agree. We're led to believe that we need to upgrade every 2 years or so... yet a great many are still using old CPUs, even dual cores, with new software and OSes without a care in the world. Because there is no noticeable improvement in CPU usage. CPU power became irrelevant after the C2Q; nothing beyond that power is justifiable in normal home or office usage. Certainly some professional users will want a cheap workstation and will buy into the high-end PC market, likewise extreme gamers or those after bragging rights. But thinking that anything from browsing to medium Photoshop usage or any moderate video-editing software use will REQUIRE anything past a quad core like low-end i5s or this Kaveri is plain false. You will however notice the lack of a powerful GPU when gaming or doing other GPU-intensive tasks... so AMD has a clear winner here.

    I do agree it's not suited for heavy x86 work... but honestly... most software stacks that previously relied heavily on the CPU are moving to OpenCL to get a massive boost from the GPU... Photoshop being just one of many... so yeah, the powerful GPU on Kaveri is a good incentive to buy, and the x86 performance is better than Richland, which is sufficient for me (as I currently use a Richland CPU), so...
  • Syllabub - Friday, January 17, 2014 - link

    I am not going to try and pick a winner but I follow your line of reasoning. I have a system with a e6750 C2D and Nvidia 9600 that still gets the job done just fine. It might be described as a single purpose type of system meaning I ask it to run one or possibly two programs at the same time. What I think is pretty wild is that when I put it together originally I probably sank something close to $250 into the CPU and GPU purchase while today I potentially get similar performance for under $130 or so. The hard part is buying today in a manner that preserves a level of performance equivalent to the old system; always feel the tug to bump up the performance ladder even if I don't really need it.
  • Flunk - Thursday, January 16, 2014 - link

    That doesn't really make sense unless you also include equivalently-priced current Intel processors. People may be moving on from Core 2s but they have the opportunity to buy anything on the market right now, not just AMD chips.
  • PPB - Tuesday, January 14, 2014 - link

    Adding a $350 CPU plus $50 GPU to an iGP gaming comparison = Anandtech keeping it classy.
  • MrSpadge - Tuesday, January 14, 2014 - link

    You do realize they're not recommending this in any way, just showing the full potential of a low-end discrete GPU which wouldn't be bottlenecked by any modern 3+ core CPU?
  • Homeles - Tuesday, January 14, 2014 - link

    PPB being an ignorant critic, as usual.

    "For reference we also benchmarked the only mid-range GPU to hand - a HD 6750 while connected to the i7-4770K."
