Kaveri and Counting Cores

With the move towards highly integrated SoCs we've seen a variety of approaches to core counts. Apple, Intel and Qualcomm still count CPU cores when advertising an SoC. For Apple and Qualcomm that's partially because neither company is particularly fond of disclosing the configuration of their GPUs. More recently, NVIDIA took the somewhat insane stance of counting GPU CUDA cores on its Tegra K1 SoC. Motorola on the other hand opted for the bizarre choice of aggregating CPU, GPU and off-die companion processors with the X8 platform in its Moto X smartphone. Eventually we will have to find a way to characterize these highly integrated SoCs, particularly when the majority of applications actually depend on/leverage both CPU and GPU cores.

AMD finds itself in a unique position with Kaveri where it has a truly unified CPU/GPU architecture and needs to establish a new nomenclature for use in the future. With 47% of the Kaveri die dedicated to the GPU, and an architecture that treats CPU and GPU as equals, I can understand AMD's desire to talk about the total number of cores on the APU.

AMD settled on the term "Compute Core", which can refer to either an x86 (or maybe eventually ARM) CPU core or a GCN compute unit. The breakdown is as follows:

  • Each thread on a CPU is a Compute Core
  • Each Compute Unit on the IGP is a Compute Core
  • Total Compute Cores = CPU Compute Cores + IGP Compute Cores

This means that the high-end SKU, the A10-7850K, will have a total of 12 Compute Cores: four from the CPU (two Steamroller modules supporting four threads) and eight from the IGP (eight GCN compute units in the R7 graphics).
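The counting rule is simple enough to sketch in a few lines of Python. The SKU figures below are AMD's own launch numbers; the function name is my own shorthand.

```python
# AMD's Compute Core arithmetic: CPU threads plus GCN compute units.
def compute_cores(cpu_threads, gcn_cus):
    """Total Compute Cores = CPU Compute Cores + IGP Compute Cores."""
    return cpu_threads + gcn_cus

# Launch SKUs as (CPU threads, GCN CUs), per AMD's breakdown.
skus = {
    "A10-7850K": (4, 8),
    "A10-7700K": (4, 6),
    "A8-7600":   (4, 6),
}

for name, (threads, cus) in skus.items():
    total = compute_cores(threads, cus)
    print(f"{name}: {total} Compute Cores ({threads} CPU + {cus} GPU)")
# A10-7850K: 12 Compute Cores (4 CPU + 8 GPU)
# A10-7700K: 10 Compute Cores (4 CPU + 6 GPU)
# A8-7600: 10 Compute Cores (4 CPU + 6 GPU)
```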

There are some qualifications to be made on this front. Technically AMD is correct – each compute unit in the IGP and each thread on the CPU can run separate code. The Hawaii GCN architecture can spawn as many kernels as it has compute units, whereas a couple of generations ago the GPU was restricted to one compute kernel at a time (with blocks of work merely being split across the CUs). However, these 12 Compute Cores are clearly not equivalent: a programmer still has to write code for the CPU and the GPU specifically in order to use all of the processing power available.

AMD tells us that whenever it (or its partners) promotes the new APUs, two sets of numbers should be quoted in reference to the Compute Cores – the total, and the CPU/GPU breakdown. Thus the A10-7850K would be marketed as a “12 Compute Core” device, with “(4 CPU + 8 GPU)” following immediately after. I applaud AMD's decision not to obfuscate the internal configuration of its APUs. This approach seems the most sensible if AMD wants to tout the total processing power of the APU while also telling users who understand a bit more what the actual configuration of the SoC is. The biggest issue is how to address users who automatically assume that more cores == better. The root of this problem is very similar to the old PR-rating debates of the Athlon XP era. Explaining the intricacies of CPU/GPU programming to end users is really no different than explaining why IPC * frequency matters more than absolute frequency.

When a programmer obtains an APU, the OpenCL profiler should locate the eight compute units of the GPU and display them to the user for compute offloading; at the same time, it is up to the programmer to use the CPU threads as appropriately as possible, even with AMD’s third-generation Bulldozer modules implementing a dual-INT + single-FP design.
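For illustration, this is roughly how an OpenCL host program tallies those compute units: the real query is clGetDeviceInfo with CL_DEVICE_MAX_COMPUTE_UNITS (exposed as device.max_compute_units in pyopencl). The devices below are modeled with a namedtuple so the logic stands alone without a driver; the device names are hypothetical.

```python
# Sketch: summing CL_DEVICE_MAX_COMPUTE_UNITS over GPU devices, as an
# OpenCL host program would. Devices are mocked here; on real hardware
# pyopencl's platform.get_devices() would supply them.
from collections import namedtuple

Device = namedtuple("Device", ["name", "dev_type", "max_compute_units"])

def gpu_compute_units(devices):
    """Sum CL_DEVICE_MAX_COMPUTE_UNITS across GPU-type devices."""
    return sum(d.max_compute_units for d in devices if d.dev_type == "GPU")

# What an A10-7850K platform might plausibly enumerate as:
devices = [
    Device("AMD Steamroller x86", "CPU", 4),     # four CPU threads
    Device("AMD Radeon R7 Graphics", "GPU", 8),  # eight GCN CUs
]
print(gpu_compute_units(devices))  # 8
```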

At launch, AMD will offer the following configurations:

  • A10-7850K: 12 Compute Cores (4 CPU + 8 GPU)
  • A10-7700K: 10 Compute Cores (4 CPU + 6 GPU)
  • A8-7600: 10 Compute Cores (4 CPU + 6 GPU)

The problem with distilling the processing power of the APU into a number of Compute Cores lies primarily on the CPU side. AMD keeps the GPU at approximately the same clock (720 MHz for all three SKUs), but the CPU frequency differs greatly, especially on the A8-7600, which has a configurable TDP and in 45W mode operates 300-400 MHz lower.

Steamroller Architecture Redux

We've already documented much of what Steamroller brings to the table based on AMD's Hot Chips presentation last year, but I'll offer a quick recap here.

Steamroller brings forward AMD's Bulldozer architecture, largely unchanged. We're still talking about a dual-core module featuring two independent integer execution cores with a single shared floating point execution core capable of executing instructions from two threads in parallel. A single module still appears as two cores/threads to the OS.

In Bulldozer and Piledriver, each integer core had its own independent scheduler, but the two cores shared a single fetch and decode unit: instructions would come in, and decoded operations would be fed to each integer pipe on alternating clock cycles. In Steamroller the decode hardware is duplicated in each module, so each integer core gets its own decode unit; the single shared FP unit is fed by both.

The L1 instruction cache grows from 64KB to 96KB per module, which AMD claims reduces misses by up to 30%. An updated branch predictor is responsible for reducing mispredicted branches by up to 20%. Both the integer and FP register files grow in size, as does the scheduling window; the combination increases the number of dispatches per thread by up to 25%.

There are huge improvements on the store side. Steamroller can now issue up to two stores simultaneously, compared to one in Bulldozer/Piledriver. The load/store queue sizes go up by around 20% as well. It's remarkable just how much low-hanging fruit there was in the Bulldozer design.

GCN in an APU

The integrated graphics solution on Trinity/Richland launched with a Cayman-derived VLIW4 architecture, which unfortunately arrived shortly after the desktop side of the equation finished moving from VLIW5/VLIW4 to GCN. Having a product stack with largely different GPU architectures doesn't help anyone, particularly on the developer front. Looking forward, sticking with GCN was the appropriate thing to do: Kaveri now uses the same architecture found in AMD’s high-end R9 290X GPU, based on Hawaii.

This enabled AMD to add in all the features it currently has on Hawaii with little to no effort – the TrueAudio DSP, the upgraded Video Coding Engine and the Unified Video Decoder are examples. Whether or not AMD decides to develop an APU with more than 8 GCN CUs is another matter. This is a point we've brought up with AMD internally and one that I'm curious about - do any AnandTech readers have an interest in an even higher end APU with substantially more graphics horsepower? Memory bandwidth obviously becomes an issue, but the real question is how valuable an Xbox One/PS4-like APU would be to the community.

Comments

  • eanazag - Wednesday, January 15, 2014 - link

    In reference to the no FX versions, I don't think that will change. I think we are stuck with it indefinitely. From the AMD server roadmap and info in this article related to process, I believe that the Warsaw procs will be a die shrink to 12/16 because the GF 28nm process doesn't help clocks. The current clocks on the 12/16 procs already suck so they might stay the same or better because of the TDP reduction at that core count, but it doesn't benefit in the 8 core or less pile driver series. Since AMD has needed to drive CPU clock way higher to compensate for a lack of IPC and the 28 nm process hurts clocks, I am expecting to not see anything for FX at all. Only thing that could change that is if a process at other than GF would make a good fit for a die shrink. I still doubt they will be doing any more changes to the FX series at the high end.

    So to me, this might force me to consider only Intel for my next build because I am still running discrete GPUs in desktop and I want at least 8 core (AMD equivalent in Intel) performance CPUs in my main system. I will likely go with a #2 Haswell chip. I am not crazy about paying $300 for a CPU, but $200-300 is okay.

    I would not be surprised to see an FX system with 2P like the original FX. The server roadmap is showing that. This would essentially be two Kaveri's and maybe crossfire between the two procs. That sounds slightly interesting if I could ratchet up the TDP for the CPU. It does sound like a Bitcoin beast.
  • britjh22 - Wednesday, January 15, 2014 - link

    I think there are some interesting points to be made about Kaveri, but I think the benchmarks really fall short of pointing to some possibly interesting data. Some of the things I got from this:

    1. The 7850k is too expensive for the performance it currently offers (no proliferation of HSA), and the people comparing it to cheaper CPU/dGPU are correct. However to say Kaveri fails based on that particular price comparison is a failure to see what else is here, and the article does point that out somewhat.

    2. The 45W part does seem to be the best spot at the moment for price to performance, possibly indicating that more iGPU resources don't give much benefit without onboard cache like Crystalwell/Iris Pro. However, putting the 4770R in amongst the benches is not super useful due to the price and lack of availability, not to mention it not being socketed.

    3. The gaming benchmarks may be the standard for AT, but they really don't do an effective job of either proving or disproving AMD's claims for gaming performance. Plenty of people will say (and have, looking at the comments) that AMD has failed at 1080p gaming based on scores at 1080p extreme settings. Even some casual experimentation to see what is actually achievable at 1080p would be helpful and informative.

    4. I think the main target for these systems isn't really being addressed by the review, which may be difficult to do in a score/objective way, but I think it would be useful. I think of systems like this, and more based off the 65W/45W parts as great mainstream parts. For that price ($100-130ish) you would be looking at an i3 with iGP, or a lower feature pentium part with a low end dGPU. I think at this level you get a lot more from your money with AMD. You have a system which one aspect will not become inadequate before the other (CPU vs GPU), how many relatives do we know where they have an older computer with enough CPU grunt, but not enough GPU grunt. I've seen quite a few where the Intel integrated was just good enough at the time of launch, but a few years down the road would need a dGPU or more major system upgrade. A system with the A8-7600 would be well rounded for a long time, and down the road could add a mid grade dGPU for good gaming performance. I believe it was an article on here that recently showed even just an A8 was quite sufficient for high detail 1080p when paired with a mid to high range card.

    5. As was referenced in another review and in the comments, a large chunk of Steam users are currently being served by iGPUs which are worse than this. These are the people who play MMOs, free to play games, Source games, gMod games, DOTA2/LoL, indie games, and things like Hearthstone. For them, and most users that these should be aimed at, the A10-7850K (at current pricing) is not a winner, and they would probably be better (value) or equally (performance) served by the A8-7600. This is a problem with review sites, including AT, which tend to really look at the high end of the market. This is because the readership (myself included) is interested for personal decision making, and the manufacturers provide these products as, performance wise, they are the most flattering. However, I think some of the most interesting and prolific advances are happening in the middle market. The review does a good job of pointing that out with the performance charts at 45W, however I think some exploration into what was mentioned in point #3 would really help to flesh this out. Anand's evaluation of CPU advances slowing down in his Mac Pro review is a great example of this, and really points out how HSA could be a major advancement. I upgraded from a Q6600 to a 3570K, and don't see any reasons coming up to make a change any time soon; CPUs have really become somewhat stagnant at the high end of performance. Hopefully AMD's gains at the 45W level can pan out into some great APUs in laptops for AMD, for all the users of games like the above mentioned.
  • fteoath64 - Sunday, January 19, 2014 - link

    As consumers, our problem with the prices inching upwards in the mid-range is that Intel is not supplying enough models of the i3 range within the price point of AMD APU (mid to highest models). This means the prices are well segmented in the market such that they will not change giving excuse for slight increases as we have seen with Richland parts. It seems like lack of competition in the segment ranges indicate a cartel like behaviour in the x86 market.
    AMD is providing the best deal on a per transistor basis while consumers expect their CPU performance to run on par with Intel. That is not going to happen as Intel's GPU improvement inches closer to AMD. With HSA, the tables have turned for AMD and Intel, and Nvidia certainly will have to respond some time in the future. This will come when the software changes for HSA make a significant improvement in overall performance for AMD APUs. We shall see but I am hopeful.
  • woogitboogity - Wednesday, January 15, 2014 - link

    Ah AMD... to think that in the day of Thunderbird they were once the under-appreciated underdog where the performance was. The rebel against the P4 and its unbelievably impractical pipeline architecture.

    Bottom line is Intel still needs them as anti-trust suit insurance... with this SoC finally getting off the ground, is anyone else wondering whether Intel was less aggressive with their own SoC stuff as an "AMD doggy/gimp treat"? Still nice to be able to recommend a processor without worrying about the onboard graphics when they are on chip.
  • Hrel - Wednesday, January 15, 2014 - link

    "do any AnandTech readers have an interest in an even higher end APU with substantially more graphics horsepower? Memory bandwidth obviously becomes an issue, but the real question is how valuable an Xbox One/PS4-like APU would be to the community."

    I think as a low end Steam Box that'd be GREAT! I'm not sure the approach Valve is looking to take with steam boxes, but if there's no "build your own" option then it doesn't make sense to sell it to us. Makes a lot more sense for them to do that and just sell the entire "console" directly to consumers. Or, through a reseller, but then I become concerned with additional markup from middlemen.
  • tanishalfelven - Wednesday, January 15, 2014 - link

    You can install SteamOS on whatever computer you want... even one you built yourself or one you already own. I'd personally think a PC based on something like this processor would be significantly less expensive (I can imagine 300 bucks) and maybe even faster. And more importantly with things like Humble Bundle it'd be much much cheaper in the games department...
  • tanishalfelven - Wednesday, January 15, 2014 - link

    i am wrong on faster than ps4 however, point stands
  • JBVertexx - Wednesday, January 15, 2014 - link

    As always, very good writeup, although I must confess that it took me a few attempts to get thru the HSA deep dive! Still, it was a much needed education, so I appreciate that.

    I have had to digest this, as I was initially really disappointed at the lack of progress on the CPU front, but after reading through all the writeups I could find, I think the real story here is about the A8-7600 and opening up new markets for advanced PC based gaming.

    If you think about it, that is where the incentive is for game developers to develop for Mantle. Providing the capability for someone who already has or would purchase an advanced discrete GPU to play with equal performance on an APU provides zero economic incentive for game developers.

    However, if AMD can successfully open up advanced gaming to the mass, low-cost PC market, even if that performance is substandard by "enthusiast" standards, then that does provide huge economic incentive for developers, because the cost of entry to play your game has just gone down significantly, potentially opening up a vast new customer base.

    With Steam really picking up "steam", with the consoles on PC tech, and with the innovative thinking going on at AMD, I have come around to thinking this is all really good stuff for PC gaming. And it's really the only path to adoption that AMD can take. I for one am hoping they're successful.
  • captianpicard - Wednesday, January 15, 2014 - link

    I doubt Kaveri was ever intended for us, the enthusiast community. The people whom Kaveri was intended for are not the type that would read a dozen CPU/GPU reviews and then log on to newegg to price out an optimal FPS/$ rig. Instead, they would be more inclined to buy reasonably priced prebuilt PCs with the hope that they'd be able to do some light gaming in addition to the primary tasks of web browsing, checking email, watching videos on youtube/netflix, running office, etc.

    Nothing really up till now has actually fulfilled that niche, and done it well, IMO. Lots of machines from dell, HP, etc. have vast CPU power but horrendous GPU performance. Kaveri offers a balanced solution at an affordable price, in a small footprint. So you could put it into a laptop or a smart tv or all in one pc and be able to get decent gaming performance. Relatively speaking, of course.
  • izmanq - Wednesday, January 15, 2014 - link

    why put i7 4770 with discrete HD 6750 in the integrated GPU performance charts ? :|
