Final Words

On average, Trinity's high-end 384-core GPU manages to be around 16% faster than the fastest Llano GPU, while consuming around 7% more power when active. Given that Trinity is built on the same process node as Llano, I'd call that a relatively good step forward for AMD's equivalent of a "tick". From AMD's perspective, the fact that it can continue to deliver a tangible GPU performance advantage over Intel's latest and greatest even with its die-harvested APU (the 256-core Trinity) is good news. For anyone looking to build a good entry-level gaming PC, the Trinity platform easily delivers the best processor graphics performance on the market today. If you're able to spend an extra $100 on a discrete GPU you'll get better performance, but below that price point Trinity rules. The trick, as always, will be selling the GPU performance advantage alongside the presumably lower x86 CPU performance. We'll have to wait another week for the full story on that, but if you're mostly concerned about GPU gaming performance, Trinity delivers.

Ivy Bridge was a good step forward for Intel; the problem is that only the high-end Ivy Bridge graphics configuration borders on acceptable. The HD 2500's performance is, unfortunately, really bad. It's easy to appreciate how far Intel has come when we look at improvements from one generation to the next, but running benchmarks on Trinity really puts that progress in perspective. When Haswell shows up it may be a different game entirely, but until then, if you're interested in a platform with processor graphics (with an emphasis on the graphics part), Trinity is as good as it gets.

139 Comments

  • kyuu - Friday, September 28, 2012 - link

    "What I'm most looking forward to is a tablet of Surface quality with a low-voltage Trinity powering it."

    I should have said Trinity or, even better, one of its successors.
  • calzahe - Friday, September 28, 2012 - link

    Memory is quite cheap now: you can find good DDR3-2133 4GB (2x2GB) kits for around $40 for the current dual-channel APUs, so you'd only need to add another $40 for a second 4GB (2x2GB) kit for a quad-channel APU, and done properly such an APU would be able to use the full 8GB of memory. That means for an extra $40 the new APU could use 8GB of memory, which is much more than the 3-4GB on current monster video cards that cost $500-600. For around $100-150 you can even get DDR3-2133 16GB (4x4GB).

    Can you imagine the level of next-gen graphics if APUs could fully utilise 8GB, 16GB, or even 32GB of quad-channel system memory!
  • Marburg U - Thursday, September 27, 2012 - link

    So, Anand, you've just called this a "Review".

    Yes, you named it "part 1", but the fact is that at the moment you are publishing a review containing only what AMD HAS TOLD YOU you're allowed to publish — the parts they are pleased to read.

    How the hell can I trust this site's reviews anymore?
  • silverblue - Thursday, September 27, 2012 - link

    You could always go to TechReport and join in the AMD bashing if you prefer. Whilst I don't completely agree with the idea of partially lifting the NDA in this specific fashion, it's clear that AMD wants to highlight the strengths of Trinity without muddying the waters with middling x86 performance.

    Piledriver is not AMD's answer to Intel; even Vishera won't be an i7 competitor in most things and might struggle to keep up with the i5s at times, and Zambezi was definitely underwhelming as a whole, so I can understand why they wouldn't want to focus on CPU performance. Additionally, if Vishera is due out at the same time as Trinity and people get an early look at Trinity's CPU performance, Vishera may be classed at the same performance level even though it will be generally faster than Trinity.
  • cmdrdredd - Thursday, September 27, 2012 - link

    What's clear is AMD cannot compete in the benchmarks that matter to most people who read these sites (how fast does it transcode my video vs. an i5?). So they try to hide that behind GPU performance charts.

    It's like Apple misleading people about the performance of their CPUs back in the day.
  • silverblue - Thursday, September 27, 2012 - link

    Amusingly, you'd think it would easily beat an i5 at transcoding... :P
  • Taft12 - Sunday, September 30, 2012 - link

    Uhh, the benchmarks the readers of this site care about are the ones that ARE here - the gaming benchmarks. AT readers are intelligent enough to know CPUmark, Sandra, etc. mean less than nothing.
  • torp - Thursday, September 27, 2012 - link

    The 65W A10 looks like it has the same GPU and about 10% lower CPU clocks. Now THAT part could be really interesting for a low-cost PC...
  • rarson - Thursday, September 27, 2012 - link

    Crossfire? Pairing one of these with a mid-range card in a hybrid Crossfire setup would be pretty awesome in an HTPC setup. Almost like a next-gen console, but much better.
  • RU482 - Thursday, September 27, 2012 - link

    Looking to upgrade a couple of lower-power SFF systems with one of those 65W CPUs. Wonder how much an ITX mobo will run.