After years of waiting, AMD finally unveiled its Llano APU platform fifteen months ago. The APU promise was a new world where CPUs and GPUs would live in harmony on a single, monolithic die. Delivering the best of two very different computing architectures would hopefully pave the way for a completely new class of applications. That future is still distant, but today we're at least at the point where you can pretty much take for granted that if you buy a modern CPU it's going to ship with a GPU attached to it.

Four months ago AMD took the wraps off of its new Trinity APU: a 32nm SoC with up to four Piledriver cores and a Cayman-based GPU. Given AMD's new mobile-first focus, Trinity launched as a notebook platform. The desktop PC market is far from dead, just deprioritized. Today we have the first half of the Trinity desktop launch. Widespread APU availability won't come until next month, but AMD gave us the green light to begin sharing some details, including GPU performance, starting today.


AMD's Trinity APU, 2 Piledriver modules (4 cores)

We've already gone over the Trinity APU architecture in our notebook post earlier this year. As a recap, Piledriver helped get Bulldozer's power consumption under control, while the move to Cayman's VLIW4 architecture improved efficiency on the graphics side. Compared to Llano this is a big departure, with meaningfully different CPU and GPU architectures. Given that we're still talking about the same 32nm process node, there's not a huge amount of room for performance improvements without ballooning die area, but through architecture changes and some more transistors AMD was able to deliver something distinctly faster.

Trinity Physical Comparison

                          Manufacturing Process   Die Size   Transistor Count
AMD Llano                 32nm                    228mm2     1.178B
AMD Trinity               32nm                    246mm2     1.303B
Intel Sandy Bridge (4C)   32nm                    216mm2     1.16B
Intel Ivy Bridge (4C)     22nm                    160mm2     1.4B
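
For a quick sense of what these numbers mean in terms of density, the short Python sketch below simply divides transistor count by die area using the figures from the table; it's back-of-the-envelope arithmetic, nothing supplied by AMD or Intel.

# Back-of-the-envelope transistor density from the table above.
# Values are (transistors in billions, die size in mm^2).
chips = {
    "AMD Llano (32nm)": (1.178, 228),
    "AMD Trinity (32nm)": (1.303, 246),
    "Intel Sandy Bridge 4C (32nm)": (1.16, 216),
    "Intel Ivy Bridge 4C (22nm)": (1.4, 160),
}

for name, (transistors_b, area_mm2) in chips.items():
    density = transistors_b * 1000 / area_mm2  # millions of transistors per mm^2
    print(f"{name}: {density:.2f}M transistors per mm^2")

Trinity ends up packing slightly more transistors per square millimeter than Llano on the same 32nm process, while Ivy Bridge's 22nm node sits in a different league entirely at nearly 9M transistors per mm^2.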

On the desktop Trinity gets the benefit of much higher TDPs and thus higher clock speeds. The full lineup, sans pricing, is below:

Remember that the CPU cores we're counting here are integer cores; FP resources are shared between every pair of cores in a module. Clock speeds are obviously higher compared to Llano, but Bulldozer/Piledriver did see some IPC regression compared to the earlier core design. You'll notice a decrease in GPU cores compared to Llano as well (384 vs. 400 for the top-end part), but core efficiency should be much higher in Trinity.

Again AMD isn't talking pricing today, other than to say that it expects Trinity APUs to be priced similarly to Intel's Core i3 parts. Looking at Intel's price list, that gives AMD a range of up to $134. We'll find out more on October 2nd, but for now the specs will have to be enough.

Socket-FM2 & A85X Chipset

The desktop Trinity APUs plug into a new socket: FM2. To reassure early adopters of Llano's Socket-FM1 that they won't get burned again, AMD is committing to one more generation beyond Trinity for the FM2 platform.

The FM2 socket itself is very similar to FM1, but it's keyed differently so there's no danger of embarrassingly plugging a Llano APU into your new FM2 motherboard.


Socket-FM1 (left) vs. Socket-FM2 (right)

AMD both borrows from Llano and expands on it when it comes to FM2 chipset support. The A55 and A75 chipsets make another appearance here on new FM2 motherboards, but they're joined by a new high-end option: the A85X chipset.

The big differentiators are the number of 6Gbps SATA and USB 3.0 ports. On the A85X you also get the ability to run two discrete AMD GPUs in CrossFire, although obviously there's already a fairly competent GPU on the Trinity APU die itself.

The Terms of Engagement

As I mentioned earlier, AMD is letting us go live with some Trinity data earlier than its official launch. The only stipulation? Today's preview can only focus on GPU performance. We can't talk about pricing or overclocking, and we aren't allowed to show any x86 CPU performance either. Obviously x86 CPU performance hasn't been a major focus of AMD's as of late, so it's understandable that AMD would want to put its best foot forward for these early previews. Internally AMD is also concerned that any advantages it may have in the GPU department will be overshadowed by its x86 story. AMD's recent re-hire of Jim Keller was designed to help address the company's long-term CPU roadmap; until then, however, AMD is still in the difficult position of trying to sell a great GPU attached to a bunch of CPU cores that don't land at the top of the x86 performance charts.

Tying a partial NDA to representing only certain results is a bold move by AMD. We've seen embargoes like this in the past, allowing only a subset of tests to be used in a preview. AMD had no influence on which specific benchmarks we chose; the only requirement was that we limit the first part of our review to looking at the GPU alone. Honestly, with some of the other stuff we're working on, I don't mind so much, as I wouldn't be able to have a full review ready for you today anyway. Our hands are tied, so what we've got here is the first part of a two-part look at the desktop Trinity APU. If you want to get some idea of Trinity CPU performance, feel free to check out our review of the notebook APU. You won't get a perfect idea of how Piledriver does against Ivy Bridge on the desktop, but you'll have some clue. From my perspective, Piledriver seemed more about getting power under control; Steamroller, on the other hand, appears to focus more on the performance side.

We'll get to the rest of the story on October 2nd, but until then we're left with the not insignificant task of analyzing the performance of the graphics side of AMD's Trinity APU on the desktop.

The Motherboard

AMD sent over a Gigabyte GA-F2A85X-UP4 motherboard along with an A10-5800K and A8-5600K. The board worked flawlessly in our testing, and it also gave us access to AMD's new memory profiles. A while ago AMD partnered up with Patriot to bring AMD branded memory to market. AMD's Performance line of memory includes support for AMD's memory profiles, which let you automatically set frequency, voltage and timings with a single BIOS setting.

We've always done these processor graphics performance comparisons using DDR3-1866, so nothing changes for this review. The only difference is that we had to set just a single option to configure the platform for stable 1866MHz operation.

Processor graphics performance scales really well with additional memory bandwidth, making this an obvious fit. There's nothing new about memory profiles; this is just something new for AMD's APU platform.
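
As a rough illustration of why running at DDR3-1866 matters for processor graphics, here's a minimal sketch of theoretical peak bandwidth assuming the usual dual-channel, 64-bit-per-channel desktop memory configuration; the channel assumptions are generic, not figures AMD provided for Trinity.

# Theoretical peak DDR3 bandwidth, assuming two 64-bit channels (the typical desktop setup).
def ddr3_peak_bandwidth_gb_s(transfer_rate_mt_s, channels=2, bus_width_bits=64):
    return transfer_rate_mt_s * (bus_width_bits / 8) * channels / 1000

for speed in (1333, 1600, 1866):
    print(f"DDR3-{speed}: {ddr3_peak_bandwidth_gb_s(speed):.1f} GB/s")

Going from DDR3-1600 to DDR3-1866 takes theoretical peak bandwidth from 25.6GB/s to roughly 29.9GB/s under those assumptions, which is exactly the kind of headroom an APU's graphics cores can put to use.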

Comments

  • dishayu - Thursday, September 27, 2012 - link

    Hate to be off-topic here, but I wanted to ask what happened to this week's podcast? Was really looking forward to a talk about IDF and Haswell.
  • Ryan Smith - Thursday, September 27, 2012 - link

    Busy. Busy busy busy. Perhaps on the next podcast Anand will tell you what he's been up to and how many times he's flown somewhere this month.
  • idealego - Thursday, September 27, 2012 - link

    I don't think the load GPU power consumption comparison is fair, and I'll explain why.

    The AMD processors are achieving higher frame rates than the Intel processors in Metro 2033, the game used for the power consumption chart. If you calculated watts per frame AMD would actually be more efficient than Intel.

    Another way of running this test would be to use game settings that all the processors could handle at 30 fps and then cap all tests at 30 fps. Under these test conditions each processor would be doing the same amount of work. I would be curious to see the results of such a test.

    Good article as always!
  • SleepyFE - Thursday, September 27, 2012 - link

    True.
    But you are asking for consumption/performance charts. You can do those yourself out of the data given.
    They test consumption under max load because no one will cap all their games at 30fps to keep consumption down. People use what they get, and that is what you would get if you played Metro 2033.
  • idealego - Thursday, September 27, 2012 - link

    Some people want to know the max power usage of the processor to help them select a power supply or help them predict how much cooling will be needed in their case.

    Other people, like me, are more interested in the efficiency of the architecture of the processor in general and as a comparison to the competition. This is why I'm more interested in a frames per watt or watts at a set fps, otherwise it's like comparing the "efficiency" of a dump truck to a van by comparing only fuel economy.
  • CeriseCogburn - Thursday, October 11, 2012 - link

    LMAO - faildozer now a dump truck, sounds like amd is a landfill of waste and garbage, does piledriver set the posts for the hazardous waste of PC purchase money signage ?

    Since it's great doing 30fps in low low mode so everyone can play and be orange orange instead of amd losing terribly sucking down the power station, just buy the awesome Intel Sandy Bridge with it's super efficient arch and under volting and OC capabilities and be happy.

    Or is that like verboten for amd fanboys ?
  • IntelUser2000 - Thursday, September 27, 2012 - link

    We can't even calculate it fairly because they are measuring system power, not CPU power.
  • iwod - Thursday, September 27, 2012 - link

    I think Trinity is a pretty good chip for a low-cost PC, which seems to be the case for the majority of PCs sold today. I wonder why it isn't selling well compared to Intel.
  • Hardcore69 - Thursday, September 27, 2012 - link

    I bought a 3870K in February. I've now sold it and replaced it with a G540. APUs are rather pointless unless you are a cheap ass gamer that can't afford a 7870 or above, or for a HTPC. Even there, I built a HTPC with a G540. You don't really need more anyway. Match it to a decent Nvidia GPU if you want all the fancy rendering. Personally I don't see the point of MadVR and I can't see the difference between 23.976 @ 23.976 and 23.976 @ 50Hz.

    All that being said, I bet that on the CPU side, AMD has failed. Again. CPU grunt is more important anyway. A G620 can compete generally with a 3870K on the CPU side. That is just embarrassing. The 5800K isn't much of an improvement.

    Bottom line, a Celeron is better for a basic office/pornbox, skip the Pentium, skip the i3, get an i5 if you do editing or encoding, i7 if you want to splurge. GPU performance is rather moot for most uses. Intel's HD 1000 does the job. Yes, it can accelerate via Quicksync or DXVA, yes it's good enough for YouTube. Again, if you want to game, get a gaming GPU. I've given up on AMD. Its CPU tech is too crap and its GPU side can't compensate.
  • Fox5 - Thursday, September 27, 2012 - link

    A 7870 goes for at least $220 right now; that's a pretty big price jump.

    AMD has a market: it's for anyone who wants the best possible gaming experience at a minimum price. You can't really beat the ~$100 price for decent CPU and graphics performance, when it would cost you at least half that much (probably more) for a graphics card of that performance level. Also, in the HTPC crowd, form factor and power usage are critical, so AMD wins there; I don't want a discrete card in my HTPC if I can avoid it.
