Kaveri and Counting Cores

With the move towards highly integrated SoCs we've seen a variety of approaches to core counts. Apple, Intel and Qualcomm still count CPU cores when advertising an SoC. For Apple and Qualcomm that's partially because neither company is particularly fond of disclosing the configuration of their GPUs. More recently, NVIDIA took the somewhat insane stance of counting GPU CUDA cores on its Tegra K1 SoC. Motorola on the other hand opted for the bizarre choice of aggregating CPU, GPU and off-die companion processors with the X8 platform in its Moto X smartphone. Eventually we will have to find a way to characterize these highly integrated SoCs, particularly when the majority of applications actually depend on/leverage both CPU and GPU cores.

AMD finds itself in a unique position with Kaveri where it has a truly unified CPU/GPU architecture and needs to establish a new nomenclature for use in the future. With 47% of the Kaveri die dedicated to the GPU, and an architecture that treats CPU and GPU as equals, I can understand AMD's desire to talk about the total number of cores on the APU.

AMD settled on the term "Compute Core", which can refer to either an x86 (or maybe eventually ARM) CPU core or a GCN compute unit. The breakdown is as follows:

  • Each thread on a CPU is a Compute Core
  • Each Compute Unit on the IGP is a Compute Core
  • Total Compute Cores = CPU Compute Cores + IGP Compute Cores
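Expressed in code, the counting rule above is simple arithmetic. (The helper name and dictionary layout below are our own illustration, not anything AMD ships.)

```python
# Sketch of AMD's "Compute Core" arithmetic, as defined in the bullets above.

def compute_cores(cpu_threads, gcn_compute_units):
    """Total Compute Cores = CPU Compute Cores + IGP Compute Cores."""
    return cpu_threads + gcn_compute_units

# Launch SKUs: (CPU threads, GCN compute units)
launch_skus = {
    "A10-7850K": (4, 8),  # two Steamroller modules (4 threads) + 8 CUs
    "A10-7700K": (4, 6),
    "A8-7600":   (4, 6),
}

for name, (cpu, gpu) in launch_skus.items():
    total = compute_cores(cpu, gpu)
    print(f"{name}: {total} Compute Cores ({cpu} CPU + {gpu} GPU)")
```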

This means that the high-end SKU, the A10-7850K, will have a total of 12 Compute Cores: four from the CPU (two Steamroller modules supporting four threads) and eight from the IGP (eight GCN compute units in the R7 graphics).

There are some qualifications to be made here. Technically AMD is correct – each compute unit in the IGP and each thread on the CPU can run separate code. The GCN architecture can spawn as many kernels as it has compute units, whereas a couple of generations ago the GPU was restricted to a single compute kernel at a time (with blocks of work being split across the CUs). However, these 12 Compute Cores are clearly not equivalent: a programmer still has to write code specifically for the CPU and for the GPU in order to use all of the processing power available.

Whenever AMD (or its partners) promotes the new APUs, AMD tells us clearly that two sets of numbers should be quoted in reference to Compute Cores – the total, and the breakdown of CPU/GPU on the APU. Thus the A10-7850K would be marketed as a “12 Compute Core” device, with “(4 CPU + 8 GPU)” following immediately after. I applaud AMD's decision not to obfuscate the internal configuration of its APUs. This approach seems the most sensible if AMD wants to tout the total processing power of the APU while also telling more knowledgeable users what the actual configuration of the SoC is. The biggest issue is how to address the users who automatically assume that more cores == better. The root of this problem is very similar to the old PR-rating debates of the Athlon XP era. Explaining the intricacies of CPU/GPU programming to end users is really no different than explaining why IPC * frequency matters more than absolute frequency.

When a programmer obtains an APU, an OpenCL device query will report the eight compute units of the GPU for compute offload purposes; at the same time it is up to the programmer to use the CPU threads as effectively as possible, bearing in mind that AMD's third-generation Bulldozer modules implement a dual-INT + single-FP design.

At launch, AMD will offer the following configurations:

  • A10-7850K: 12 Compute Cores (4 CPU + 8 GPU)
  • A10-7700K: 10 Compute Cores (4 CPU + 6 GPU)
  • A8-7600: 10 Compute Cores (4 CPU + 6 GPU)

The problem with distilling the processing power of the APU into a number of Compute Cores lies primarily on the CPU side. AMD keeps the GPU at approximately the same clock across these three parts (720 MHz), but the CPU frequency differs greatly, especially on the A8-7600, which has a configurable TDP and in its 45W mode operates at 300-400 MHz lower frequencies.
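A toy model makes the point: two parts with an identical Compute Core count can deliver different performance once clocks enter the picture. The clock figures below are illustrative assumptions (only the shared 720 MHz GPU clock comes from the text), and the weighting is deliberately naive.

```python
# Toy throughput model: why a Compute Core count alone can mislead.
# Clock values are illustrative assumptions, not AMD specifications.

def toy_throughput(cpu_threads, cpu_mhz, gpu_cus, gpu_mhz):
    # Naively treat each CPU thread and each GCN CU as one "core",
    # scaled by its operating clock.
    return cpu_threads * cpu_mhz + gpu_cus * gpu_mhz

# Same hypothetical "10 Compute Core" part at two cTDP settings,
# with the CPU clocked ~400 MHz lower in the low-power mode.
full_power = toy_throughput(4, 3300, 6, 720)
low_power  = toy_throughput(4, 2900, 6, 720)

# Identical Compute Core label, different delivered performance.
print(full_power, low_power, full_power > low_power)
```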

Steamroller Architecture Redux

We've already documented much of what Steamroller brings to the table based on AMD's Hot Chips presentation last year, but I'll offer a quick recap here.

Steamroller brings forward AMD's Bulldozer architecture, largely unchanged. We're still talking about a dual-core module featuring two independent integer execution cores with a single shared floating point execution core capable of executing instructions from two threads in parallel. A single module still appears as two cores/threads to the OS.

In Bulldozer and Piledriver, each integer core had its own independent scheduler, but the two cores shared a single fetch and decode unit. Instructions would come in and decoded operations would be fed to each integer pipe on alternating clock cycles. In Steamroller the decode hardware is duplicated in each module, so now each integer core gets its own decode unit; the single shared FP unit can receive operations from either decoder.

L1 instruction caches increase in size from 64KB to 96KB per module, which AMD claims reduces misses by up to 30%. There's an updated branch predictor, responsible for reducing mispredicted branches by up to 20%. Both the integer and FP register files grow in size, as does the scheduling window; the combination increases the number of dispatches per thread by up to 25%.

There are big improvements on the store side. Steamroller can now issue up to two stores at the same time, compared to one in Bulldozer/Piledriver. The load/store queue sizes go up by around 20% as well. It's remarkable just how much low-hanging fruit there was in the Bulldozer design.

GCN in an APU

The integrated graphics solution on Trinity/Richland launched with a Cayman-derived VLIW4 architecture, which unfortunately arrived shortly after the desktop side of the equation finished moving from VLIW5/VLIW4 to GCN. Having a product stack with largely different GPU architectures doesn't help anyone, particularly on the developer front. Looking forward, sticking with GCN was the appropriate thing to do: Kaveri now uses GCN, the same architecture found in AMD's high-end, Hawaii-based R9 290X GPU.

This enabled AMD to bring over the features it already has on Hawaii with little to no effort – the TrueAudio DSP and the upgraded Video Coding Engine and Unified Video Decoder are examples. Whether or not AMD decides to develop an APU with more than 8 GCN CUs is another matter. This is a point we've brought up with AMD internally and one that I'm curious about - do any AnandTech readers have an interest in an even higher-end APU with substantially more graphics horsepower? Memory bandwidth obviously becomes an issue, but the real question is how valuable an Xbox One/PS4-like APU would be to the community.

380 Comments


  • Fox5 - Wednesday, January 15, 2014 - link

    There's also the issue of the Iris Pro's 128MB edram. At a certain point, it probably is insufficient for the settings and resolution.
  • BSMonitor - Wednesday, January 15, 2014 - link

    Power consumption numbers?? Interesting to see what adding that many transistors (particularly 128 GPU cores did to those)
  • Da W - Wednesday, January 15, 2014 - link

    242 comments so far. Whatever people say, AMD still interests a lot of people and they have a future ahead of them.
  • thomascheng - Wednesday, January 15, 2014 - link

    Lets hope they do well, or we will be stuck with buying $1000 Intel CPUs and Nvidia GPUs.
  • TheJian - Wednesday, January 15, 2014 - link

    "For casual gaming, AMD is hitting the nail square on the head in its quest for 1080p gaming at 30 frames per second, albeit generally at lower quality settings."

    Maybe if they'd said for 1280x1024 gaming your comment would be true. Most of the games have mins at 1080p well below 20, and some even avg below 30. This resolution is NOT playable on these crap chips. Buy a vid card and Intel. Period. I really thought these would be faster, but then the downclock due to process happened. Gains in games where you are STILL unplayable isn't a gain. It is a waste of space to benchmark things you can't actually play at. I would rather have seen a dozen games benched at 1280x1024 and a few you KNOW you could run above 30fps at 1680x1050. 1080p here was pointless. AMD should be derided for even mentioning this res with so many games not even playable at avg fps never mind what happens when you click the MIN button in your charts.

    Discrete clearly has long legs, as anyone building one of these machines with the new APUs will quickly realize they need to buy discrete to enhance their gaming. I really don't think the dire situation here at 1080p will change until 20nm or more, where you may at that point have MORE games that CAN run 1080p ok, vs what you see here where it's just a joke today.

    The games don't even look the same when turning off every feature possible in the game just to hit 30fps. Do you want your games to look like a nintendo 64, or a PC? They should have grown the die a bit for more gpu, so at least 1680x1050 would be pretty good. I don't see AMD making money on cpus for 2yrs :( That means consoles + gpus have to hold the company up and that won't be enough to keep up with R&D in mobile, gpu, cpu. Consoles sold 7mil so far, so at $10-15 per chip ($100 price? per console assuming 15% margin? if they even get that) we're talking about a max of 105mil profits from consoles for the quarter. If they don't keep selling like it's launch month for the next 12 months I see them slowly getting weaker. They owe GF 200mil, so 4 of these console quarters would be ~400mil which is 1/2 blown on GF fines, and the other 200mil goes to interest on their huge debt each year. They need to make some REAL money on cpu/gpu/mobile or this never gets better right? We know cpu is basically out for 2yrs as they say in this article. OUCH. So gpu/mobile has to make something in those two years or this just gets worse and drivers etc will see more phase 1, 2, 3 fixing crap for ages.

    The only impressive thing I saw here was Mantle perf claimed by AMD in BF4. But how many times can you afford $8mil to get this done? I'm sure they amped things up for the first time showcase BF4, but how many times can you even afford $2-4mil for this stuff? And then do you get what the dev said in the AMD APU show (only one BTW), a 20% not being unreasonable for your efforts? Far below $8mil for apparently 45% in BF4 right? Devs will opt for the 2weeks to transfer a game to MOBILE first as NV showed can be done with any openGL game (all ps3, ps4 games, many pc games etc) like Serious Sam3 and Trine2 (most of the time according to anandtech was spent on controls, NOT the porting). Unreal 3 engine ported in 4 (yes...FOUR) days by epic/mozilla and it only took 10 people or so. Dice said 2-3 months on mantle. Devs might do 2 weeks just for more sales to a 1.2B unit market on mobile, they will need PAYMENT to do it for Mantle which gets ZERO extra profits (only makes people happy they bought AMD, no extra cash right?). I really hope ARM takes off on desktops, because we currently have an Intel only race and need someone with CASH to fight them. Bring on K1 (and all it's enemies) and all the games for mobile this will create (ported or new, I don't own a ps3 so I'd buy some of those ported that I can't get now). Since we have no real x86 competition any more we need ARM to pick up AMD's slack.
  • Novaguy - Wednesday, January 15, 2014 - link

    Its going to depend on the settings; other reviewers who did 1080p + low to medium settings demonstrated playable frame rates for the a8 but not the intel igps.
  • mikato - Wednesday, January 15, 2014 - link

    Ian/Ryan - This seems wrong - "For the 100W APUs at 1280x1024, there is almost no movement between the Richland and the Trinity APUs, except for Company of Heroes" under "IGP Gaming, 1280x1024". In this particular graph, it shows an improvement from Trinity to Richland and then not much improvement from there to Kaveri, except for Company of Heroes.
  • Ryan Smith - Wednesday, January 15, 2014 - link

    Noted and fixed. Thank you.
  • tekphnx - Wednesday, January 15, 2014 - link

    Looks like a pretty nice improvement for its intended market, namely the HTPC and casual gaming crowd. Calling the onboard GPU decent at 1080p is a laugh though, as other people have said. For 720p, sure, but not 1080p.

    Prices have shot up from the previous generation, which is unwelcome. And I very much lament the omission of Steamroller from the FX roadmap as an FX owner myself. AMD shouldn't abandon FX... the least they could have done if they're abandoning FX is to include a 6-core Kaveri part at the top end, but it looks like that's not materializing either.
  • zodiacsoulmate - Wednesday, January 15, 2014 - link

    first 4 pages are way better than the last 4 pages :) anyway a great article i read like half an hour
