
Improved Turbo

Trinity features a much improved version of AMD's Turbo Core technology compared to Llano. First and foremost, both CPU and GPU turbo are now supported. In Llano only the CPU cores could turbo up if there was additional TDP headroom available, while the GPU cores ran no higher than their max specified frequency. In Trinity, if the CPU cores aren't using all of their allocated TDP but the GPU is under heavy load, it can exceed its typical max frequency to capitalize on the available TDP. The same obviously works in reverse.
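The shared-budget idea can be sketched in a few lines. The wattage figures and the `turbo_budget` helper below are purely illustrative assumptions, not AMD's actual power budgets or firmware logic:

```python
# Hypothetical sketch of a shared TDP budget between CPU and GPU.
# TDP_W and the load figures are made-up illustrative numbers.

TDP_W = 35.0  # assumed total package budget, watts

def turbo_budget(cpu_load_w, gpu_load_w):
    """Watts left under the shared cap that either unit may spend on turbo."""
    used = cpu_load_w + gpu_load_w
    return max(0.0, TDP_W - used)

# GPU under heavy load, CPU mostly idle: in Trinity the GPU may spend the
# spare watts to exceed its nominal clock (in Llano only the CPU could).
print(turbo_budget(cpu_load_w=8.0, gpu_load_w=18.0))  # 9.0 W of headroom
```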

Under the hood, the microcontroller that monitors all power consumption within the APU is much more capable. In Llano, the Turbo Core microcontroller looked at CPU/GPU activity and performed a static allocation of power based on that data. In Trinity, AMD implemented a physics-based thermal calculation model using fast transforms. The model takes power and translates it into a dynamic temperature calculation. Power is still estimated based on workload, which AMD claims has less than a 1% error rate, but the new model derives accurate temperatures from those estimates: the thermal model is accurate to within 2°C, in real time. More accurate thermal data lets the turbo microcontroller respond more quickly, which should allow frequencies to scale up and down more effectively.
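AMD hasn't published the model itself, but a first-order RC thermal filter gives the flavor of turning an estimated power into a live temperature. All constants below (`R_TH`, `C_TH`, `DT`, the ambient temperature) are assumptions for illustration only:

```python
# Minimal first-order thermal model: temperature relaxes toward
# ambient + P * R_TH with time constant R_TH * C_TH. All constants assumed.

R_TH = 1.5    # thermal resistance, degC per watt (illustrative)
C_TH = 0.5    # thermal capacitance, joules per degC (illustrative)
DT = 0.001    # update interval, seconds

def step_temp(temp_c, power_w, ambient_c=45.0):
    """One Euler step of dT/dt = (P - (T - ambient)/R_TH) / C_TH."""
    flow_out_w = (temp_c - ambient_c) / R_TH
    return temp_c + DT * (power_w - flow_out_w) / C_TH

t = 45.0
for _ in range(5000):      # 5 simulated seconds at a steady 15 W
    t = step_temp(t, 15.0)
print(round(t, 1))         # settles near ambient + 15 * R_TH = 67.5 degC
```

A real controller would run this per thermal zone and feed the result straight into the frequency decision, rather than waiting on slow external sensor reads.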

At the end of the day this should improve performance, although it's difficult to compare directly to Llano since so much has changed between the two APUs. Just as with Llano, AMD specifies nominal and max turbo frequencies for the Trinity CPU/GPU. 

A Beefy Set of Interconnects

The holy grail for AMD (and Intel for that matter) is a single piece of silicon with CPU and GPU style cores that coexist harmoniously, each doing what they do best. We're not quite there yet, but in pursuit of that goal it's important to have tons of bandwidth available on chip.

Trinity still features two 64-bit DDR3 memory controllers with support for up to DDR3-1866 speeds. The controllers also add support for 1.25V memory. Notebook-bound Trinities (Socket FS1r2 and Socket FP2) support up to 32GB of memory, while the desktop variants (Socket FM2) can handle up to 64GB.
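For reference, the peak theoretical bandwidth of that memory interface works out as follows:

```python
# Peak bandwidth of two 64-bit DDR3-1866 channels: channels x bytes x MT/s.
channels = 2
bus_bytes = 64 // 8          # each controller is 64 bits wide
transfers_per_sec = 1866e6   # DDR3-1866

peak_gbps = channels * bus_bytes * transfers_per_sec / 1e9
print(peak_gbps)  # ~29.9 GB/s theoretical peak
```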

Hyper Transport is gone as an external interconnect, leaving only PCIe for off-chip IO. The Fusion Control Link is a 128-bit (each direction) interface that gives off-chip IO devices access to system memory. Trinity also features a 256-bit (in each direction, per memory channel) Radeon Memory Bus (RMB) with direct access to the DRAM controllers. The excessive width of this bus likely implies that it's used for CPU/GPU communication as well.

IOMMU v2 is also supported by Trinity, giving supported discrete GPUs (e.g. Tahiti) access to the CPU's virtual memory. With Llano, data came from disk into memory, was then copied from the CPU's address space into pinned memory accessible by the GPU, and only then could the GPU pull it into its frame buffer. With access to the CPU's virtual address space, the data now goes from disk, to memory, then directly to the GPU's memory, skipping the intermediate memory-to-memory copy. Eventually we'll get to the point where there's truly one unified address space, but steps like these are what will get us there.
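As a rough illustration of why this matters, compare the two data paths described above. The step lists are a simplification of the article's description, not real driver behavior:

```python
# Llano: an extra staging copy into pinned, GPU-visible memory.
llano_path = [
    "disk -> system memory",
    "system memory -> pinned (GPU-visible) memory",
    "pinned memory -> GPU frame buffer",
]

# Trinity + IOMMU v2: the GPU can read the CPU's virtual address space.
trinity_path = [
    "disk -> system memory",
    "system memory -> GPU frame buffer",
]

print(len(llano_path) - len(trinity_path), "copy eliminated")
```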

The Trinity GPU

Trinity's GPU is probably the most well understood part of the chip, as it's basically a cut-down Cayman from AMD's Northern Islands family. The VLIW4 design features 6 SIMD engines, each with 16 VLIW4 arrays, for a total of up to 384 cores. The A10 SKUs get all 384 cores, while the lower-end A8 and A6 parts get 256 and 192, respectively. FP64 is supported, but at 1/16 the FP32 rate.
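The shader count follows directly from that SIMD layout:

```python
# Trinity GPU core count: 6 SIMDs x 16 VLIW4 arrays x 4 lanes per array.
simds = 6
arrays_per_simd = 16
lanes_per_array = 4   # the "4" in VLIW4

cores = simds * arrays_per_simd * lanes_per_array
print(cores)  # 384 on the A10; A8/A6 parts disable units down to 256/192
```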

As AMD never released any low-end Northern Islands VLIW4 parts, Trinity's GPU is a bit unique. It technically has fewer cores than Llano's GPU, but as we saw with AMD's transition from VLIW5 to VLIW4, the loss didn't really impact performance but rather drove up efficiency. Remember that most of the time that 5th unit in AMD's VLIW5 architectures went unused.

The design features 24 texture units and 8 ROPs, in line with what you'd expect from what's effectively 1/4 of a Cayman/Radeon HD 6970. Clock speeds are obviously lower than a full blown Cayman, but not by a ton. Trinity's GPU runs at a normal maximum of 497MHz and can turbo up as high as 686MHz.
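Those figures imply the following rough peak pixel fill rate at the turbo clock. This is a theoretical ceiling, not a measured number:

```python
# Peak pixel fill rate: ROPs x clock, using the 686MHz turbo clock above.
rops = 8
turbo_clock_hz = 686e6

gpixels_per_sec = rops * turbo_clock_hz / 1e9
print(gpixels_per_sec)  # ~5.5 Gpixels/s at full turbo
```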

Trinity includes AMD's HD Media Accelerator, which comprises accelerated video decode (UVD3) and encode (VCE) components. The Video Codec Engine is borrowed from Graphics Core Next and is actually functional in the hardware/software we have here today. Don't get too excited though; the VCE-enabled software we have today won't take advantage of the identical hardware in discrete GCN GPUs. AMD tells us this is purely a matter of having the resources to prioritize Trinity first, and that discrete GPU VCE support is coming.


  • Stas - Tuesday, May 15, 2012 - link

    I agree with xd_1771. A mid-range CPU from 2 years ago is plenty for any CPU requirements an average user might have (Office, browser, IM, pr0n). The one thing that's been limiting laptops for generations is the GPU. AMD has brought serious graphics to laptops. Not only do you benefit through improvements in gaming and 3D software; with almost every resource-intensive application becoming GPU-accelerated, you get better performance all around.
  • zephxiii - Tuesday, May 15, 2012 - link

    I am using this old ThinkPad T60 with a C2D at 1.66GHz, built 4/2007, and it is plenty fast enough for regular use lol. The only thing that really sucks about it is the spinning HD in it.

    I use a T61p at home with a C2D at 2.2GHz and a Quadro something HD with an SSD, and that thing does everything I need it to (Photoshop, Lightroom, Internet, Flash video etc.).

    Both running Windows 7.
  • BSMonitor - Tuesday, May 15, 2012 - link

    Nope, I promise that a C2D at 1.66GHz lags in Flash-enhanced/Java Runtime environments. Especially multitasking.

    Please, quit defining regular use as acceptance of slow. Drop even a Core 2 Quad 9550 in that home PC, and I promise you would not go back.
  • Belard - Tuesday, May 15, 2012 - link

    I have a ThinkPad R61 with a PDC (bottom-end Core 2 with reduced cache) at 1.6GHz. I bought it for $550 off the shelf, new, when Vista was about 8 months old. It came with XP Pro, 1GB RAM and, more importantly, a matte screen. I use it almost every day, and since then I've added 1GB and Windows 7; it runs better than it ever did when it was new.

    It's slow compared to my C2Q Q6600, but the R61 does what I need from a mobile system. I sure don't like using Photoshop on it, but it's mostly for browsing, Office apps and xfer of data/work.

    It's still faster than ANY Pentium 4-class CPU.

    I have an urge to go Ivy Bridge this year... but my Q6600 doesn't really keep me waiting much (other than video encoding) with 4GB / Win7. Nope, going on vacation this summer outweighs a new computer. :)
  • BSMonitor - Tuesday, May 15, 2012 - link

    Stop telling everyone what CPU is GOOD enough. There truly is software out there that my Core 2 Duo at work lags behind on. My Core i7 system at home is remarkably smoother and more responsive. Neither has an SSD.
  • tfranzese - Tuesday, May 15, 2012 - link

    For a user who can't stand to wait, you've got your priorities screwed up if you're not using an SSD on those systems.
  • evolucion8 - Tuesday, May 15, 2012 - link

    I wonder which kind of software runs too slow on a C2D. I have an i7 2600K at 4.5GHz, much faster in WinRAR, media encoding, gaming etc. But running everyday tasks like web browsing, office, media playback etc. doesn't feel much different from my Core 2 T9300. My laptop does have tolerable encoding power, though my i7 definitely destroys it; but considering that my C2D has a 35W TDP, I don't mind losing some performance for the sake of lower heat dissipation and battery consumption.
  • vegemeister - Tuesday, May 15, 2012 - link

    We were just getting to the point where a CPU could be good for 6-8 years, but then the web developers started making applications and desktop environments. Not to mention the horrors of Flash and Java. What Intel giveth, web 2.0 taketh away.
  • medi01 - Thursday, May 17, 2012 - link

    Bullshit.

    Most of web 2.0 nowadays "also has to run on tablets", and no way in hell is it "java based", "flash based" or CPU intensive.
  • seapeople - Tuesday, May 22, 2012 - link

    You people are a little crazy coming up with exotic applications that stress CPUs. It's much simpler than that.

    I'm running a Q2720m with an Intel SSD and fiber optic internet, and I notice immediately if I turn Turbo Boost off while browsing standard webpages with Chrome + Adblock. My browsing is noticeably CPU limited, especially when I'm clicking through dozens of large webpages to find a specific one (such as browsing backwards through poorly designed blogs).

    I would detest running something with the single-threaded speed of AMD's latest offerings. Of course, that's why I'm not in that target market.
