Kaveri and Counting Cores

With the move towards highly integrated SoCs we've seen a variety of approaches to core counts. Apple, Intel and Qualcomm still count CPU cores when advertising an SoC. For Apple and Qualcomm that's partially because neither company is particularly fond of disclosing the configuration of their GPUs. More recently, NVIDIA took the somewhat insane stance of counting GPU CUDA cores on its Tegra K1 SoC. Motorola on the other hand opted for the bizarre choice of aggregating CPU, GPU and off-die companion processors with the X8 platform in its Moto X smartphone. Eventually we will have to find a way to characterize these highly integrated SoCs, particularly when the majority of applications actually depend on/leverage both CPU and GPU cores.

AMD finds itself in a unique position with Kaveri: it has a truly unified CPU/GPU architecture and needs to establish a new nomenclature for the future. With 47% of the Kaveri die dedicated to the GPU, and an architecture that treats the CPU and GPU as equals, I can understand AMD's desire to talk about the total number of cores on the APU.

AMD settled on the term "Compute Core", which can refer to either an x86 (or maybe eventually ARM) CPU core or a GCN compute unit. The breakdown is as follows:

  • Each thread on a CPU is a Compute Core
  • Each Compute Unit on the IGP is a Compute Core
  • Total Compute Cores = CPU Compute Cores + IGP Compute Cores

This means that the high-end SKU, the A10-7850K, will have a total of 12 Compute Cores: four from the CPU (two Steamroller modules supporting four threads) and eight from the IGP (eight compute units in the R7 graphics).
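AMD's Compute Core arithmetic can be sketched in a few lines. This is purely illustrative (the function name is mine, not AMD's): CPU threads plus GCN compute units give the total.

```python
# Illustrative sketch of AMD's "Compute Core" arithmetic.
# Function and variable names are mine, not AMD terminology.
def compute_cores(cpu_modules: int, gcn_compute_units: int) -> dict:
    """Each Steamroller module exposes 2 threads; each thread and each
    GCN compute unit counts as one Compute Core."""
    cpu_cores = cpu_modules * 2
    return {
        "cpu": cpu_cores,
        "gpu": gcn_compute_units,
        "total": cpu_cores + gcn_compute_units,
    }

# A10-7850K: two Steamroller modules (4 threads) + 8 GCN CUs
print(compute_cores(2, 8))  # {'cpu': 4, 'gpu': 8, 'total': 12}
```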

There are some qualifications to be made here. Technically, AMD is correct – each compute unit in the IGP and each thread on the CPU can run separate code. The Hawaii GCN architecture can spawn as many kernels as it has compute units, whereas a couple of generations ago the GPU was restricted to a single compute kernel at a time (with only blocks of work split across the CUs). However, these 12 compute cores are clearly not equivalent: a programmer still has to write code specifically for the CPU and for the GPU in order to use all of the available processing power.

Whenever AMD (or its partners) promotes the new APUs, AMD tells us that two sets of numbers should be quoted in reference to the Compute Cores – the total, and the CPU/GPU breakdown. Thus the A10-7850K would be marketed as a “12 Compute Core” device, with “(4 CPU + 8 GPU)” following immediately after. I applaud AMD's decision not to obfuscate the internal configuration of its APUs. This approach seems the most sensible if AMD wants to tout the total processing power of the APU while also telling more knowledgeable users what the actual configuration of the SoC is. The biggest issue is how to address users who automatically assume that more cores == better. The root of this problem is very similar to the old PR-rating debates of the Athlon XP era. Explaining the intricacies of CPU/GPU programming to end users is really no different from explaining why IPC * frequency matters more than absolute frequency.
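The IPC-versus-frequency point can be made with a toy calculation. The IPC figures below are invented round numbers for illustration, not measured values for any real chip:

```python
# Hypothetical illustration of why IPC x frequency beats raw frequency.
# The IPC values are invented for the example, not real measurements.
def throughput(ipc: float, freq_ghz: float) -> float:
    """Instructions retired per nanosecond = IPC * clock cycles per ns."""
    return ipc * freq_ghz

chip_a = throughput(ipc=1.0, freq_ghz=4.0)  # higher clock, lower IPC -> 4.0
chip_b = throughput(ipc=1.5, freq_ghz=3.0)  # lower clock, higher IPC -> 4.5
print(chip_a < chip_b)  # True: the "slower" chip does more work per second
```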

When a programmer obtains an APU, an OpenCL device query will report the eight compute units of the GPU as a target for offloading compute; at the same time it is up to the programmer to schedule CPU threads as appropriately as possible, bearing in mind that AMD's third-generation Bulldozer modules implement a dual-INT + single-FP design.
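A sketch of what that device query might look like. This assumes the third-party pyopencl package and an installed OpenCL runtime, so the pyopencl import is deferred; the selection helper itself is plain Python:

```python
# Sketch: discover GPU compute units via OpenCL and pick an offload target.
# Assumes the third-party pyopencl package (imported lazily below).
def pick_offload_target(devices):
    """Given (name, kind, max_compute_units) tuples, return the GPU
    device with the most compute units, or None if no GPU is present."""
    gpus = [d for d in devices if d[1] == "GPU"]
    return max(gpus, key=lambda d: d[2]) if gpus else None

def enumerate_devices():
    import pyopencl as cl  # deferred so the rest of the sketch runs without it
    out = []
    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            kind = "GPU" if dev.type & cl.device_type.GPU else "CPU"
            out.append((dev.name, kind, dev.max_compute_units))
    return out

if __name__ == "__main__":
    # On a Kaveri A10-7850K the GPU device should report 8 compute units.
    print(pick_offload_target(enumerate_devices()))
```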

At launch, AMD will offer the following configurations:

  • A10-7850K: 12 Compute Cores (4 CPU + 8 GPU)
  • A10-7700K: 10 Compute Cores (4 CPU + 6 GPU)
  • A8-7600: 10 Compute Cores (4 CPU + 6 GPU)

The problem with distilling the processing power of the APU into a number of Compute Cores lies primarily on the CPU side. AMD keeps the GPU at approximately the same clock across SKUs (720 MHz for these three), but CPU frequency differs greatly, especially on the A8-7600, which has a configurable TDP and in 45W mode will clock 300-400 MHz lower.

Steamroller Architecture Redux

We've already documented much of what Steamroller brings to the table based on AMD's Hot Chips presentation last year, but I'll offer a quick recap here.

Steamroller carries AMD's Bulldozer architecture forward, largely unchanged. We're still talking about a dual-core module featuring two independent integer execution cores and a single shared floating point unit capable of executing instructions from two threads in parallel. A single module still appears to the OS as two cores/threads.

In Bulldozer and Piledriver, each integer core had its own independent scheduler, but the two cores shared a single fetch and decode unit: instructions came in, and decoded operations were fed to each integer pipe on alternating clock cycles. In Steamroller the decode hardware is duplicated, so each integer core gets its own decode unit; the shared FP unit can receive decoded operations from either one.

L1 instruction caches grow from 64KB to 96KB per module, which AMD claims reduces misses by up to 30%. An updated branch predictor reduces mispredicted branches by up to 20%. Both the integer and FP register files grow, as does the scheduling window, a combination that increases dispatches per thread by up to 25%.
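A back-of-the-envelope average memory access time (AMAT) calculation shows what a 30% miss reduction is worth. All latencies and the baseline miss rate below are assumed round numbers, not AMD's figures:

```python
# AMAT illustration: what a 30% L1I miss reduction buys.
# Hit latency, miss penalty, and baseline miss rate are assumed
# round numbers for the example, not AMD-published figures.
def amat(hit_cycles: float, miss_rate: float, miss_penalty: float) -> float:
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_cycles + miss_rate * miss_penalty

before = amat(hit_cycles=3, miss_rate=0.05, miss_penalty=20)         # 4.0 cycles
after = amat(hit_cycles=3, miss_rate=0.05 * 0.70, miss_penalty=20)   # 3.7 cycles
print(f"{(1 - after / before) * 100:.1f}% faster instruction fetch on average")
# -> 7.5% faster instruction fetch on average
```

Even a sizeable miss reduction translates into a single-digit fetch-time win, which is why AMD quotes per-structure improvements rather than overall IPC gains.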

There are huge improvements on the store side. Steamroller can now issue up to two stores per cycle, compared to one in Bulldozer/Piledriver. Load/store queue sizes go up by around 20% as well. It's remarkable just how much low-hanging fruit there was in the Bulldozer design.

GCN in an APU

The integrated graphics in Trinity/Richland used a Cayman-derived VLIW4 architecture, which unfortunately arrived shortly after the desktop side of the equation had finished moving from VLIW5/VLIW4 to GCN. A product stack split across substantially different GPU architectures doesn't help anyone, particularly on the developer front. Moving to GCN was the right thing to do: Kaveri now uses the same architecture found in AMD’s high-end R9 290X GPU, based on the Hawaii platform.

This enabled AMD to pull in the features it currently ships on Hawaii with little to no effort – the TrueAudio DSP and the upgraded Video Coding Engine and Unified Video Decoder are examples. Whether or not AMD decides to develop an APU with more than 8 GCN CUs is another matter. This is a point we've brought up with AMD internally and one that I'm curious about - do any AnandTech readers have an interest in an even higher end APU with substantially more graphics horsepower? Memory bandwidth obviously becomes an issue, but the real question is how valuable an Xbox One/PS4-like APU would be to the community.
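The bandwidth concern is easy to quantify with simple datasheet-style arithmetic. Kaveri's dual-channel DDR3 peak is computed below; the PS4 figure is Sony's quoted number for its GDDR5 interface:

```python
# Rough peak-bandwidth arithmetic behind the "memory bandwidth becomes
# an issue" point for a bigger APU.
def ddr3_bandwidth_gbps(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Peak GB/s: transfers/sec * bytes per transfer (64-bit bus) * channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

kaveri = ddr3_bandwidth_gbps(2133, channels=2)  # dual-channel DDR3-2133
ps4_gddr5 = 176.0                               # Sony's quoted PS4 figure
print(f"Kaveri: {kaveri:.1f} GB/s vs PS4: {ps4_gddr5:.0f} GB/s")
# -> Kaveri: 34.1 GB/s vs PS4: 176 GB/s
```

A console-class APU would need roughly five times Kaveri's peak bandwidth, which DDR3 on a 128-bit bus simply cannot provide.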

Comments

  • extremesheep49 - Friday, February 21, 2014 - link

    I don't know if anyone will even see this now but...

    "The reality is quite clear by now: AMD isn't going to solve its CPU performance issues with anything from the Bulldozer family. What we need is a replacement architecture, one that I suspect we'll get after Excavator concludes the line in 2015."

    I don't know that this conclusion is very fair considering this statement if you compare it to a previous article linked below. The linked article recommends a (currently) $100 100W A8-5600K. The Kaveri equivalent is a $120 45W CPU of approximately the same performance.

    Doesn't the linked article's recommendations contradict your Kaveri conclusion at least for some cases? Kaveri's CPU performance probably is sufficient for many discrete GPU setups.

    http://anandtech.com/show/6934/choosing-a-gaming-c...

    Quote from link:
    "Recommendations for the Games Tested at 1440p/Max Settings
    A CPU for Single GPU Gaming: A8-5600K + Core Parking updates"
  • Novaguy - Sunday, February 23, 2014 - link

    Gaming performance is usually (but not always) GPU bottlenecked, not CPU bottlenecked.

    The reason why a trinity was getting recommended in a lot of gaming boxes was that in dollar limited scenarios, you'll often get better gaming performance mating a $120 quad core trinity with a $300 gpu, versus a $220 i5 with a $200 gpu.

    For even better results, mate an $80 Athlon II X4 750K if you're going discrete gpu, but I don't think the gpu-less trinity chip was available then.
  • PG - Monday, February 24, 2014 - link

    I wanted to compare Kaveri to some other cpus not in this review. Bench would be perfect for that, but the Kaveri cpus are not listed there. Why? Can they be added?
  • Cptn_Slo - Tuesday, April 1, 2014 - link

    Well at least this shows that AMD is able to increase performance significantly given the appropriate die shrink. I'm a big Intel fan but a healthy company/market needs competition, and looks like AMD is able to offer that in at least some areas.
  • zobisch - Wednesday, April 2, 2014 - link

    I have an h60 cooler on my 7850k with 2400mhz ram OC'd to 4.4ghz and I love it... I think the corner for APU's will really turn when DDR4 boards come out. I also would like to see an 8core, 24 compute gpu as well but that's probably a die shrink or more away.
  • vickfan104 - Tuesday, May 6, 2014 - link

    An Xbox One/PS4-like APU is what I'm still looking for from AMD. To me, that seems like the point where an APU becomes truly compelling as opposed to CPU + discrete GPU.
  • P39Airacobra - Thursday, January 1, 2015 - link

    I still can't understand why anyone would be insane enough to pay the outrageously high price for an AMD APU simply because it has a built-in GPU that can play some games! When for the same price you can get a high-end i5 CPU and mid-range GPU for a few dollars more! Or for the exact same price you can get an AMD quad and a mid-range GPU. Either choice would blow any AMD APU out of the water! Yes you can Crossfire the APU, but you can also Crossfire and SLI regular GPUs. Besides, by the time you've paid for an AMD APU and a GPU to Crossfire with it you could have got a nice i3 or FX 6300, or even a last-gen Ivy i5, with a GPU like an R9 270 or a GTX 660. And either one of those would blow away an APU/Crossfire setup! What are you people thinking? I swear people today would not only buy the Brooklyn Bridge once but more than once!
  • P39Airacobra - Thursday, January 1, 2015 - link

    Most logical thing to do is buy FX-6300 for $119 and a Motherboard for $60 and then buy a GTX 660 or a R9 270 and buy a 1600x900 resolution monitor and then you will be able to max out anything.
  • P39Airacobra - Thursday, January 1, 2015 - link

    Besides 60fps on Medium/High at only 1280x1024 is a laugh! A GT 740 and a R7 250 can do better than that!
  • kzac - Monday, February 16, 2015 - link

    After living with the processor on a Gigabyte main board for the past several months, I can honestly say it's bested both the Core i3 and i5 systems (some quad core) I have used in the past. What it might not score on benchmarks for all-out throughput it makes up for in its multitasking capability. What normally crashes my i3 and makes my i5 struggle while multitasking (many things open and operating at the same time) doesn't tend to affect the A10 APU. The Core i3 I am using is the later 3220 chip, which completely chokes with anything above average multitasking under W7pro, even though it has 12 gig of 1600 RAM. The Core i5 was better at multitasking than the Core i3 but still not near as effective as the AMD A10 7850. While I cannot speak to the performance of the AMD A10 for gaming, for multitasking it is very effective.
    For gaming I have used the FX series AMD processors, both Quad Core and 8 core.
