Introduction and Piledriver Overview

Brazos and Llano were both immensely successful parts for AMD. The company sold tons despite not delivering leading x86 performance. The success of these two APUs gave AMD a lot of internal confidence that it was possible to build something that didn't prioritize x86 performance but rather delivered a good balance of CPU and GPU performance.

AMD's commitment to the world was that we'd see annual updates to all of its product lines. Llano debuted last June, and today AMD gives us its successor: Trinity.

At a high level, Trinity combines 2-4 Piledriver x86 cores (1-2 Piledriver modules) with up to 384 VLIW4 Northern Islands generation Radeon cores on a single 32nm SOI die. The result is a 1.303B transistor chip (up from 1.178B in Llano) that measures 246mm^2 (compared to 228mm^2 in Llano).

Trinity Physical Comparison
                          Manufacturing Process   Die Size   Transistor Count
AMD Llano                 32nm                    228mm^2    1.178B
AMD Trinity               32nm                    246mm^2    1.303B
Intel Sandy Bridge (4C)   32nm                    216mm^2    1.16B
Intel Ivy Bridge (4C)     22nm                    160mm^2    1.4B

Without a change in manufacturing process, AMD is faced with the tough job of increasing performance without ballooning die size. Die size has only gone up by around 8%, but both CPU and GPU performance see double-digit increases over Llano. Power consumption is also improved over Llano, making Trinity a win across the board for AMD compared to its predecessor. If you liked Llano, you'll love Trinity.
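As a quick sanity check, the deltas from the table above work out like this (a trivial back-of-the-envelope calculation):

```python
# Back-of-the-envelope check of the Llano -> Trinity deltas cited above.
llano_transistors, trinity_transistors = 1.178e9, 1.303e9
llano_die, trinity_die = 228.0, 246.0  # mm^2

transistor_growth = (trinity_transistors / llano_transistors - 1) * 100
die_growth = (trinity_die / llano_die - 1) * 100

print(f"Transistors: +{transistor_growth:.1f}%")  # +10.6%
print(f"Die size:    +{die_growth:.1f}%")         # +7.9%
```

In other words, AMD packed in roughly 10.6% more transistors for only a 7.9% larger die.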

The problem is what happens when you step outside of AMD's world. Llano had a difficult time competing with Sandy Bridge outside of GPU workloads. AMD's hope with Trinity is that its hardware improvements combined with more available OpenCL accelerated software will improve its standing vs. Ivy Bridge.

Piledriver: Bulldozer Tuned

While Llano featured as many as four 32nm x86 Stars cores, Trinity features up to two Piledriver modules. Given the not-so-great reception of Bulldozer late last year, we were worried about how a Bulldozer derivative would stack up in Trinity. I'm happy to say that Piledriver is a step forward from the CPU cores used in Llano, largely thanks to a good deal of cleanup work on the Bulldozer foundation.

Piledriver picks up where Bulldozer left off. The fundamental architecture remains unchanged; instead, AMD improved it in all areas. Piledriver is very much a second pass on the Bulldozer architecture, tidying everything up, capitalizing on low-hanging fruit and significantly improving power efficiency. If you were hoping for an architectural reset with Piledriver, you will be disappointed. AMD is committed to Bulldozer, and that's quite obvious if you look at Piledriver's high level block diagram:

Each Piledriver module is the same 2+1 INT/FP combination that we saw in Bulldozer. You get two integer cores, each with their own schedulers, L1 data caches, and execution units. Between the two is a shared floating point core that can handle instructions from one of two threads at a time. The single FP core shares the data caches of the dual integer cores.

Each module appears to the OS as two cores; however, you don't get as many resources as you would from two traditional AMD cores. This table from our Bulldozer review highlights part of the problem when looking at the front end:

Front End Comparison
                                  AMD Phenom II          AMD FX            Intel Core i7
Instruction Decode Width          3-wide                 4-wide            4-wide
Single Core Peak Decode Rate      3 instructions         4 instructions    4 instructions
Dual Core Peak Decode Rate        6 instructions         4 instructions    8 instructions
Quad Core Peak Decode Rate        12 instructions        8 instructions    16 instructions
Six/Eight Core Peak Decode Rate   18 instructions (6C)   16 instructions   24 instructions (6C)
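The pattern in the table falls out of simple arithmetic: in Bulldozer's case the 4-wide decoder is shared per two-core module rather than private per core. A minimal sketch, with core counts and widths taken from the table above:

```python
# Peak aggregate decode rate depends on whether the decode hardware is
# per core (Phenom II, Core i7) or shared per two-core module (AMD FX).
def peak_decode(cores: int, width: int, shared_per_module: bool) -> int:
    if shared_per_module:
        return (cores // 2) * width   # one front end feeds each module
    return cores * width              # every core has its own decoder

assert peak_decode(4, 3, False) == 12  # Phenom II X4: four 3-wide decoders
assert peak_decode(4, 4, True) == 8    # AMD FX, 4 cores: two shared 4-wide
assert peak_decode(4, 4, False) == 16  # Core i7, 4 cores: four 4-wide
```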

It's rare that you get anywhere near peak hardware utilization, so don't be too put off by these deltas, but it is a tradeoff that AMD made throughout Bulldozer. In general, AMD opted for better utilization of fewer resources (partially through increasing some data structures and other elements that feed execution units) vs. simply throwing more transistors at the problem. AMD also opted to reduce the ratio of integer to FP resources within the x86 portion of its architecture, clearly to support a move to the APU world where the GPU can be a provider of a significant amount of FP support. Piledriver doesn't fundamentally change any of these balances. The pipeline depth remains unchanged, as does the focus on pursuing higher frequencies.

Fundamental to Piledriver is a significant switch in the type of flip-flops used throughout the design. Flip-flops, or flops as they are commonly called, are simple pieces of logic that store some form of data or state. In a microprocessor they can be found in many places, including the start and end of a pipeline stage. Work is done prior to a flop and committed at the flop or array of flops. The output of these flops becomes the input to the next array of logic. Normally flops are hard edge elements—data is latched at the rising edge of the clock.

In very high frequency designs however, there can be a considerable amount of variability or jitter in the clock. You either have to spend a lot of time ensuring that your design can account for this jitter, or you can incorporate logic that's more tolerant of jitter. The former requires more effort, while the latter burns more power. Bulldozer opted for the latter.

In order to get Bulldozer to market as quickly as possible, after far too many delays, AMD opted to use soft edge flops quite often in the design. Soft edge flops are the opposite of their hard edge counterparts; they are designed to allow data to spill over the clock edge while still latching correctly. Piledriver, on the other hand, was the result of a systematic effort to swap in smaller, hard edge flops wherever there was timing margin in the design. The result is a tangible reduction in power consumption: across the board there's a 10% reduction in dynamic power consumption compared to Bulldozer, and some workloads apparently push a 20% reduction in active power. Given Piledriver's role in Trinity, a mostly mobile-focused product, this power reduction was well worth the effort.

At the front end, AMD put in additional work to improve IPC. The schedulers are now more aggressive about freeing up tokens. Similar to the soft vs. hard flip flop debate, it's always easier to be conservative when you retire an instruction from a queue. It eases verification as you don't have to be as concerned about conditions where you might accidentally overwrite an instruction too early. With the major effort of getting a brand new architecture off of the ground behind them, Piledriver's engineers could focus on greater refinement in the schedulers. The structures didn't get any bigger; AMD just now makes better use of them.

The execution units are also a bit beefier in Piledriver, but not by much. AMD claims significant improvements in floating point and integer divides, calls and returns, but for client workloads these changes amount to minimal (sub-1%) improvements.

Prefetching and branch prediction are both significantly improved with Piledriver. Bulldozer did a simple sequential prefetch, while Piledriver can prefetch variable lengths of data and across page boundaries in the L1 (mainly a server workload benefit). In Bulldozer, if prefetched data wasn't used (i.e., it was incorrectly prefetched), it would clog up the cache, as it would come in tagged as the most recently accessed data. However, if prefetched data isn't immediately used, it's likely it will never be used. Piledriver now immediately tags unused prefetched data as least recently used, allowing the cache controller to quickly evict it if the prefetch was incorrect.
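The eviction trick can be sketched with a toy LRU cache. This is purely illustrative (the class and method names are invented here, not AMD's implementation), assuming demand fills insert at the MRU position while prefetch fills insert at the LRU position:

```python
from collections import OrderedDict

# Toy sketch of the policy described above: demand fills go to the MRU
# position; prefetched lines go to the LRU position, so a wrong prefetch
# is the first line evicted instead of displacing useful data.
class PrefetchAwareCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()  # ordered LRU (front) -> MRU (back)

    def fill(self, addr, prefetched: bool = False):
        if addr in self.lines:
            del self.lines[addr]
        elif len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)            # evict the LRU line
        self.lines[addr] = True
        if prefetched:
            self.lines.move_to_end(addr, last=False)  # tag as LRU on fill

    def access(self, addr) -> bool:
        if addr not in self.lines:
            self.fill(addr)                           # demand miss
            return False
        self.lines.move_to_end(addr)                  # promote to MRU on use
        return True
```

With a two-line cache, a prefetched line that is never touched gets evicted before the demand-filled line, which is the behavior Piledriver is after.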

Another change is that Piledriver includes a perceptron branch predictor that supplements the primary branch predictor in Bulldozer. The perceptron algorithm is a history based predictor that's better suited for predicting certain branches. It works in parallel with the old predictor and simply tags branches that it is known to be good at predicting. If the old predictor and the perceptron predictor disagree on a tagged branch, the perceptron's path is taken. Improving branch prediction accuracy is a challenge, but it's necessary in highly pipelined designs. These sorts of secondary predictors are a must as there's no one-size-fits-all when it comes to branch prediction.
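A perceptron predictor keeps one signed weight per bit of global branch history and predicts from their dot product. The sketch below is a textbook-style toy, not AMD's design; the history length, the absence of a training threshold, and the names are all illustrative assumptions:

```python
# Toy perceptron branch predictor: one weight per bit of global history,
# plus a bias weight. Predict taken when the dot product is non-negative.
HISTORY_LEN = 8

class PerceptronPredictor:
    def __init__(self):
        self.weights = [0] * (HISTORY_LEN + 1)   # index 0 is the bias
        self.history = [1] * HISTORY_LEN         # +1 taken, -1 not taken

    def predict(self) -> bool:
        y = self.weights[0] + sum(w * h for w, h in
                                  zip(self.weights[1:], self.history))
        return y >= 0

    def update(self, taken: bool):
        t = 1 if taken else -1
        # Train only on a mispredict (training threshold omitted here)
        if self.predict() != taken:
            self.weights[0] += t
            for i, h in enumerate(self.history):
                self.weights[i + 1] += t * h
        self.history = self.history[1:] + [t]    # shift in the outcome
```

After a handful of never-taken outcomes the weights learn the pattern and the predictor settles on not-taken, which a simple saturating counter would also do; the perceptron's real advantage is on longer correlated patterns that counters can't capture.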

Finally, Piledriver also adds new instructions to better align its ISA with Intel's: FMA3 (which Haswell will also support) and F16C (already supported by Ivy Bridge).
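To illustrate what those two instruction families compute (not how they are encoded), here's a stdlib-only sketch: Python's struct 'e' format models F16C's half-precision rounding, and exact rational arithmetic stands in for FMA3's single final rounding. The values are chosen for illustration:

```python
import struct
from fractions import Fraction

# F16C converts between 32-bit and 16-bit (half precision) floats.
# struct's 'e' format applies the same round-to-nearest-half behavior.
half = struct.unpack('<e', struct.pack('<e', 3.14159))[0]
print(half)  # 3.140625 -- the nearest representable float16

# FMA3 computes a*b + c with a single rounding at the end, versus two
# roundings when the multiply and add are separate instructions.
a, b, c = 0.1, 10.0, -1.0
unfused = a * b + c                                     # rounds twice -> 0.0
fused = float(Fraction(a) * Fraction(b) + Fraction(c))  # one final rounding
print(unfused, fused)  # 0.0 vs ~5.55e-17
```

The fused result preserves the tiny residue that the double-rounded version loses, which matters for numerical code; it also halves instruction count for multiply-add chains.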

Comments

  • Taft12 - Tuesday, May 15, 2012 - link

    He said "better".

    http://ir.amd.com/phoenix.zhtml?c=74093&p=irol...

    "Linux OS supports manual switching which requires restart of X-Server to switch between graphics solutions."

    They ain't there yet!
  • JarredWalton - Tuesday, May 15, 2012 - link

    Enduro sounds like it's just a renamed "AMD Dynamic Switchable Graphics" solution. I haven't had a chance to test it yet, unfortunately, but I can say that the previous solution is still very weak. And you still don't get separate driver updates from AMD and Intel.
  • Spunjji - Wednesday, May 16, 2012 - link

    Drivers are the big deal here. I like that I get standard drivers using my Optimus laptop.

    What I don't like is that it f#@!s up Aero constantly and occasionally performs other bizarre, unpredictable manoeuvres.
  • ToTTenTranz - Tuesday, May 15, 2012 - link

    Greetings,

    Is it possible to provide some battery life results with gaming?

    It's true that an Intel+nVidia Optimus solution should be better for both plugged-in gaming and wireless productivity (more expensive too, but that's been covered in the review).
    However, a 35W Trinity should consume quite a bit less power than a 35W Intel CPU + 35W nVidia GPU, so it might be a worthy tradeoff for some.

    Furthermore, when are we to expect Hybrid Crossfire results with Trinity+Turks? Is there any laptop OEM with that on the roadmap?
    That should give us a better comparison to Ivy Bridge + GK107 solutions, as it would provide better gaming performance at a rather small price premium ($50 at most?).
  • x264fan - Tuesday, May 15, 2012 - link

    Thanks for the nice review, but let me give you some very important information regarding your test.

    1. The x264 HD Benchmark ver. 4.0 you used relies on quite an old x264.exe for encoding. For Bulldozer/Piledriver it is important to replace it with a newer one, which contains specific assembler optimisations that give a nice performance boost on AMD processors by using the new instructions introduced in those CPUs. You can see how many there are here:
    http://git.videolan.org/gitweb.cgi?p=x264.git;a=sh...

    I would suggest downloading a new x264 build from x264.nl and replacing it, then running the benchmark again. It would also show you how beneficial the new instructions are.

    Another suggestion would be to run this benchmark using an x64 build of x264 through the x86 Avisynth wrapper avs4x264mod.exe. That way you can see how much difference the x64 instructions make.

    In fact, x264 is so nicely optimised it can be used for CPU testing.

    2. You used Media Player Classic Home Cinema for measuring playback of H.264 streams and battery life. So do I; unfortunately, every time I use it with DXVA acceleration on my i7-2630 laptop I end up with terrible artefacts on lower bitrate content. Blocks float around and destroy picture quality. It is not as visible on Blu-ray content, where the picture is recompressed more than it is recreated using x264 transformations, but it is still there. My point is: if Intel's decoding/drivers are buggy enough to make DXVA mode this unusable, why would anyone measure battery life in this mode?
    Without DXVA the Intel numbers would not be so good, but so far that is the only usable mode.

    3. I must say I am amazed how good the HD 4000 is, but what about picture quality? From time to time we see reports that NVIDIA or AMD has cheated in their drivers, sacrificing picture quality, so how about Intel...

    I hope you read my comment and update your test.
  • JarredWalton - Tuesday, May 15, 2012 - link

    So, help me out here: where do I get the actual x264 executables if I want to run an updated version of the x264 HD test? We've tried to avoid updating to newer releases just so that we could compare results with previously tested CPUs, but perhaps it's time to cut the strings. What I'd like is a single EXE that works optimally for Sandy Bridge, Ivy Bridge, Llano, and Trinity architectures. And I'm not interested in downloading source code, trying to get a compiled version to work, etc. -- I gave up being a software developer over a decade ago and haven't looked back. :-)
  • x264fan - Wednesday, May 16, 2012 - link

    http://x264.nl is the newest semi-official build. It contains all the current optimisations for every CPU, but since it's command line, you can turn them on and off. I also heard that this week there will be a new HD Benchmark 5.0, which will include the newest build.
  • plonk420 - Monday, July 9, 2012 - link

    the problem with this is that then the test isn't strictly "x264 hd benchmark version x.00" ... and would be harder to compare to other runs of the same test.

    if they did this in ADDITION to v4.00 or whatever (and VERY clearly noted the changes), that might be some useful data.
  • jabber - Tuesday, May 15, 2012 - link

    ....how about adding a line/area to the benchmark graphs that stands for "Beyond this point performance is pointless/unnoticeable to the user".

    That way we can truly tell if we can save ourselves a boatload of cash. All-out performance is great and all, but I don't run benchmarks all day like some here, so it's not so important. I just need to know: will it do the job?

    Or would that be bad for the sponsors?
  • bji - Tuesday, May 15, 2012 - link

    It is an interesting idea, but it would be such incredible fodder for fanboys to flame about, and even reasonable people would have a hard time deciding where that line should be drawn.

    I think the answer to your basic question is that any mobile CPU in the Llano/Trinity/Sandy Bridge/Ivy Bridge lines will be more than sufficient for you or any other user *unless* you have a specific task that you know is highly CPU intensive and requires all of the CPU you can get.
