Kaveri and Counting Cores

With the move towards highly integrated SoCs we've seen a variety of approaches to core counts. Apple, Intel and Qualcomm still count CPU cores when advertising an SoC. For Apple and Qualcomm that's partially because neither company is particularly fond of disclosing the configuration of their GPUs. More recently, NVIDIA took the somewhat insane stance of counting GPU CUDA cores on its Tegra K1 SoC. Motorola on the other hand opted for the bizarre choice of aggregating CPU, GPU and off-die companion processors with the X8 platform in its Moto X smartphone. Eventually we will have to find a way to characterize these highly integrated SoCs, particularly when the majority of applications actually depend on/leverage both CPU and GPU cores.

AMD finds itself in a unique position with Kaveri where it has a truly unified CPU/GPU architecture and needs to establish a new nomenclature for use in the future. With 47% of the Kaveri die dedicated to the GPU, and an architecture that treats both CPU and GPU as equals, I can understand AMD's desire to talk about the total number of cores on the APU.

AMD settled on the term "Compute Core", which can refer to either an x86 (or maybe eventually ARM) CPU core or a GCN compute unit. The breakdown is as follows:

  • Each thread on a CPU is a Compute Core
  • Each Compute Unit on the IGP is a Compute Core
  • Total Compute Cores = CPU Compute Cores + IGP Compute Cores

This means that the high-end SKU, the A10-7850K, will have a total of 12 Compute Cores: four from the CPU (two Steamroller modules supporting four threads) and eight from the IGP (eight compute units in the R7 graphics).
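AMD's counting rule is simple addition. As an illustrative sketch (the function name and output formatting are my own, not AMD's):

```python
def compute_cores(cpu_threads: int, gpu_cus: int) -> str:
    """Apply AMD's Compute Core rule: each CPU thread and each
    GCN compute unit counts as one Compute Core."""
    total = cpu_threads + gpu_cus
    return f"{total} Compute Cores ({cpu_threads} CPU + {gpu_cus} GPU)"

# A10-7850K: two Steamroller modules (four threads) plus eight GCN CUs
print(compute_cores(4, 8))  # 12 Compute Cores (4 CPU + 8 GPU)
```

The same function reproduces the lower SKUs by swapping in six compute units.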

There are some qualifications to be made on this front. Technically AMD is correct – each compute unit in the IGP and each thread on the CPU can run separate code. The Hawaii GCN architecture can spawn as many kernels as it has compute units, whereas a couple of generations ago we were restricted to a single compute kernel on the GPU at once (with blocks of work merely being split across the CUs). However, these 12 compute cores are clearly not equivalent: a programmer still has to write code for the CPU and the GPU specifically in order to use all of the processing power available.

Whenever AMD (or its partners) promote the new APUs, AMD tells us that two sets of numbers should be quoted in reference to the Compute Cores – the total, and the CPU/GPU breakdown. Thus the A10-7850K would be marketed as a “12 Compute Core” device, with “(4 CPU + 8 GPU)” following immediately after. I applaud AMD's decision not to obfuscate the internal configuration of its APUs. This approach seems the most sensible if AMD wants to tout the total processing power of the APU while also telling those users who understand a bit more what the actual configuration of the SoC is. The biggest issue is how to address users who automatically assume that more cores == better. The root of this problem is very similar to the old PR-rating debates of the Athlon XP era. Explaining the intricacies of CPU/GPU programming to end users is really no different from explaining why IPC × frequency matters more than absolute frequency.
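The PR-rating parallel reduces to simple arithmetic: sustained throughput scales roughly with IPC × frequency, so the higher-clocked chip does not automatically win. A toy illustration with made-up numbers (neither chip corresponds to a real product):

```python
def throughput_gips(ipc: float, freq_ghz: float) -> float:
    """Rough instruction throughput in billions per second: IPC * frequency."""
    return ipc * freq_ghz

# Hypothetical parts: chip_b clocks 33% higher, chip_a retires more per cycle
chip_a = throughput_gips(ipc=2.0, freq_ghz=3.0)  # 6.0
chip_b = throughput_gips(ipc=1.2, freq_ghz=4.0)  # 4.8
assert chip_a > chip_b  # the lower-clocked chip does more real work
```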

When a programmer obtains an APU, the OpenCL profiler should locate the eight compute units of the GPU and display them for compute offloading purposes; at the same time, it is up to the programmer to leverage the CPU threads as appropriately as possible, even with AMD's third-generation Bulldozer modules implementing a dual-INT + single-FP design.

At launch, AMD will offer the following configurations:

  • A10-7850K: 12 Compute Cores (4 CPU + 8 GPU)
  • A10-7700K: 10 Compute Cores (4 CPU + 6 GPU)
  • A8-7600: 10 Compute Cores (4 CPU + 6 GPU)

The problem with distilling the processing power of the APU into a number of compute cores lies primarily on the CPU side. AMD keeps the GPU at approximately the same clock (720 MHz for all three SKUs), but the CPU frequency differs greatly, especially on the A8-7600, which has a configurable TDP and in 45W mode operates 300-400 MHz lower.

Steamroller Architecture Redux

We've already documented much of what Steamroller brings to the table based on AMD's Hot Chips presentation last year, but I'll offer a quick recap here.

Steamroller brings forward AMD's Bulldozer architecture, largely unchanged. We're still talking about a dual-core module featuring two independent integer execution cores with a single shared floating point execution core capable of executing instructions from two threads in parallel. A single module still appears as two cores/threads to the OS.

In Bulldozer and Piledriver, each integer core had its own independent scheduler, but the two cores shared a single fetch and decode unit. Instructions would come in and decoded operations would be fed to each integer pipe on alternating clock cycles. In Steamroller the decode hardware is duplicated in each module, so each integer core now gets its own decode unit; the shared FP unit can be fed by both decoders.

L1 instruction caches increase in size from 64KB to 96KB per module, which AMD claims reduces misses by up to 30%. There's an updated branch predictor responsible for reducing mispredicted branches by up to 20%. Both the integer and FP register files grow in size, as does the scheduling window; the combination increases the number of dispatches per thread by up to 25%.

There are huge improvements on the store side. Steamroller can now issue up to two stores at the same time, compared to one in Bulldozer/Piledriver. The load/store queue sizes go up by around 20% as well. It's remarkable just how much low-hanging fruit there was in the Bulldozer design.

GCN in an APU

The integrated graphics solution on Trinity/Richland launched with a Cayman-derived VLIW4 architecture, which unfortunately arrived shortly after the desktop side of the equation finished moving from VLIW5/VLIW4 to GCN. Having a product stack with largely different GPU architectures doesn't help anyone, particularly on the developer front. Looking forward, sticking with GCN was the appropriate choice: Kaveri now uses GCN, the same architecture found in AMD's high-end R9 290X GPU, based on the Hawaii platform.

This enabled AMD to add in all the features it currently has on Hawaii with little to no effort – the TrueAudio DSP, the upgraded Video Coding Engine and the Unified Video Decoder are examples. Whether or not AMD decides to develop an APU with more than 8 GCN CUs is another matter. This is a point we've brought up with AMD internally, and one that I'm curious about – do any AnandTech readers have an interest in an even higher-end APU with substantially more graphics horsepower? Memory bandwidth obviously becomes an issue, but the real question is how valuable an Xbox One/PS4-like APU would be to the community.

Comments

  • nader_21007 - Saturday, January 18, 2014 - link

    Can you show me what improvement Haswell made over the previous gen? TDP went from 77W to 84W, while performance dropped in most cases. Can't you see the charts in this review?
  • Principle - Tuesday, January 14, 2014 - link

    Andrew, that depends on size, budget, etc... and I own an AMD Piledriver CPU and could never tell you when it was supposedly slower; maybe a game takes a couple seconds longer to load, but after that it's all the same.

    And I have used Intel CPUs too, and have hiccups and lag multitasking with them in real life, that never happens on my AMD systems. If you get an i5 and an AMD GPU, that would be great and last with the GPU compute advantage of AMD GPUs and the Mantle potential.

    These Kaveri have a lot of value at launch for the entertainment center PCs, or ITX platforms because at 65W or even 45W it delivers a lot of performance in one chip that you can keep cool and quiet in a small package. Also good for all in one PCs built into the monitor. Not for the avid gamer right now, but a little more future proof than an Intel CPU in my opinion.
  • ImSpartacus - Thursday, January 16, 2014 - link

    If you're not gaming, is it really that hard to "future-proof" your CPU?

    I feel like most low end CPUs will perform "basic" non-gaming tasks for many years to come.
  • andrewaggb - Tuesday, January 14, 2014 - link

    To be clear, I'd get an i5 quad core with a 260x or 270x. I realize they aren't at all in the same price range, but it's good performance per dollar.

    I was expecting Kaveri to have 10% better CPU performance and 25% better GPU performance. This has equal CPU performance and essentially equal GPU performance. It has other improvements, but that's a serious disappointment on the performance side of things.

    I've already got 3 i5 quad cores with a 6870, 7850, and 270x in each and I'm happy with them. Just thought Kaveri might be good enough, and it is for older stuff and Minecraft and whatnot.
    But it seems like yet another year where paying the extra money and having some longevity is going to be the right move.

    Quite frankly my oldest system, the i5 750 with a 6870, would mop the floor with Kaveri in everything but power consumption.
  • yankeeDDL - Wednesday, January 15, 2014 - link

    You're kidding right?
    It practically doubled the performance per watt of Richland (45W Kaveri almost always outpaces 100W Richland) and that's disappointing?
    It's true that Richland was way behind, but the improvement is massive.
    There's still a glaring gap with Intel's CPU, but it is smaller.
    Just as much as the glaring gap on the GPU side (but this time in AMD's favor) got wider.
    HSA is the key for AMD to push the GPU advantage over to the CPU to compensate. If it works, then Kaveri will really be up to, or better than, the Core i5, which costs more than 2X ... "IF" ...
  • Jaybus - Thursday, January 16, 2014 - link

    I'm not convinced HSA is the future. It is a diminishing returns issue. The only difference between HSA and SMP is that different types of cores are being used. The bus arbitration and coherency issues are exactly the same. Neither is scalable to dozens of cores, let alone hundreds. HSA has the same limitations as SMP. Something like Knights Corner's ring bus and message passing is more likely the future. Near term, there is an advantage to HSA. Long term will rely on a much faster chip-to-chip interconnect for transfers, and segmented memory to avoid the arbitration and coherency issues. CMOS silicon photonics maybe. That would enable optical busses orders of magnitude faster than PCIe, or in fact much faster than any chip-to-chip electronic bus, and that would make something like Knights Corner's ring bus the future path to high core counts.
  • jimjamjamie - Thursday, January 16, 2014 - link

    A genuinely interesting and insightful comment, thanks.
  • artk2219 - Tuesday, January 14, 2014 - link

    Until you play a game that uses more than 2 threads, or have tasks running in the background while gaming, then you'll wish you had those two extra threads. Seriously, I wish people would quit trying to recommend dual cores for gaming or even general use, unless it's in a machine for the type of person that only does one or two things at a time. Dual cores are showing their age now, and it's only going to be worse a year or two from now. Also, why would you spend 90 on a Pentium dual core when you could spend 80 on an Athlon 750k or that same 90 on a 760k? They have similar single-thread performance and stomp the G2120 in multithreaded situations, plus they're unlocked so you can overclock to your heart's content. I'm not saying that Kaveri isn't overpriced right now; they could stand to drop 20 dollars for the top two chips and 10 for the last chip reviewed. But they just launched and those prices will change, and in the end it's easier to point people to one part for all of their needs than it is to point them to two.

    http://www.newegg.com/Product/ProductList.aspx?Sub...
  • Nagorak - Wednesday, January 15, 2014 - link

    The Intel processors are more energy efficient. That's one reason.
  • artk2219 - Wednesday, January 15, 2014 - link

    Fair enough, but it's a negligible difference once you factor in the discrete GPU that you would be pairing it with anyway. Cooling it shouldn't be any more of a problem than cooling the same setup with the dGPU; granted, there aren't really any FM2+ ITX boards, so that may be a problem if you're going for a tiny size, but that's about it.
