Improved Turbo

Trinity features a much improved version of AMD's Turbo Core technology compared to Llano. First and foremost, both CPU and GPU turbo are now supported. In Llano only the CPU cores could turbo up if there was additional TDP headroom available, while the GPU ran no higher than its maximum specified frequency. In Trinity, if the CPU cores aren't using all of their allocated TDP but the GPU is under heavy load, the GPU can exceed its typical maximum frequency to capitalize on the available TDP. The same obviously works in reverse.
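
To make the power-sharing concrete, here's a minimal sketch of shared-TDP arbitration between the CPU and GPU. The wattage figures and the allocation policy are invented for illustration; this is not AMD's actual Turbo Core algorithm.

```python
# Toy model of shared-TDP turbo: whichever side (CPU or GPU) is under-using
# its nominal power allocation donates the headroom to the other side.
# All numbers are made up for illustration; they are not Trinity's real budgets.

TOTAL_TDP_W = 35.0     # hypothetical mobile APU TDP
CPU_NOMINAL_W = 20.0   # hypothetical nominal CPU allocation
GPU_NOMINAL_W = 15.0   # hypothetical nominal GPU allocation

def turbo_budgets(cpu_demand_w, gpu_demand_w):
    """Return (cpu_budget, gpu_budget) after redistributing unused headroom."""
    cpu_use = min(cpu_demand_w, CPU_NOMINAL_W)
    gpu_use = min(gpu_demand_w, GPU_NOMINAL_W)
    headroom = TOTAL_TDP_W - cpu_use - gpu_use
    # Give the spare watts to whichever side still wants more power.
    cpu_budget = cpu_use + (headroom if cpu_demand_w > CPU_NOMINAL_W else 0)
    gpu_budget = gpu_use + (headroom if gpu_demand_w > GPU_NOMINAL_W else 0)
    return cpu_budget, gpu_budget

# GPU-heavy workload: a lightly loaded CPU lets the GPU turbo past its nominal 15W.
print(turbo_budgets(cpu_demand_w=8.0, gpu_demand_w=30.0))   # -> (8.0, 27.0)
```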

Under the hood, the microcontroller that monitors all power consumption within the APU is much more capable. In Llano, the Turbo Core microcontroller looked at activity on the CPU/GPU and performed a static allocation of power based on that data. In Trinity, AMD implemented a physics-based thermal model, computed using fast transforms, that translates power into a dynamic temperature calculation. Power is still estimated based on workload, which AMD claims has less than a 1% error rate, but the new model derives accurate temperatures from those estimates: AMD says the thermal model is accurate to within 2°C, in real time. Having more accurate thermal data allows the turbo microcontroller to respond more quickly, which should allow frequencies to scale up and down more effectively.
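
A first-order lumped RC model is the textbook way to turn a power estimate into a temperature estimate over time. The sketch below illustrates that general class of model with made-up thermal constants; AMD's actual fast-transform implementation is more sophisticated.

```python
# Minimal first-order thermal RC model: temperature moves toward
# (ambient + power * R_thermal) with time constant R*C. The constants
# here are invented for illustration; Trinity's real model is AMD's own.

import math

R_TH = 1.5      # hypothetical thermal resistance, deg C per watt
C_TH = 4.0      # hypothetical thermal capacitance, joules per deg C
AMBIENT = 35.0  # deg C

def step_temperature(temp_c, power_w, dt_s):
    """Advance the die-temperature estimate by dt_s seconds at power_w watts."""
    steady_state = AMBIENT + power_w * R_TH
    alpha = 1.0 - math.exp(-dt_s / (R_TH * C_TH))
    return temp_c + (steady_state - temp_c) * alpha

# Feed the model a burst of 25W for one second, in 1ms steps.
temp = AMBIENT
for _ in range(1000):
    temp = step_temperature(temp, power_w=25.0, dt_s=0.001)
print(round(temp, 1))  # the temperature estimate the turbo controller would react to
```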

At the end of the day this should improve performance, although it's difficult to compare directly to Llano since so much has changed between the two APUs. Just as with Llano, AMD specifies nominal and max turbo frequencies for the Trinity CPU/GPU. 

A Beefy Set of Interconnects

The holy grail for AMD (and Intel for that matter) is a single piece of silicon with CPU and GPU style cores that coexist harmoniously, each doing what they do best. We're not quite there yet, but in pursuit of that goal it's important to have tons of bandwidth available on chip.

Trinity still features two 64-bit DDR3 memory controllers, now with support for speeds of up to DDR3-1866. The controllers also add support for 1.25V memory. Notebook-bound Trinity parts (Socket FS1r2 and Socket FP2) support up to 32GB of memory, while the desktop variants (Socket FM2) can handle up to 64GB.
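
For reference, the theoretical peak bandwidth those figures imply works out as follows (the standard DDR bandwidth formula, not a measured number):

```python
# Theoretical peak bandwidth of a dual-channel DDR3-1866 configuration:
# transfers/sec * bytes per transfer per channel * number of channels.
transfers_per_sec = 1866e6    # DDR3-1866 = 1866 MT/s
bytes_per_transfer = 64 // 8  # each channel is 64 bits wide
channels = 2

peak_gb_s = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(f"{peak_gb_s:.1f} GB/s")  # ~29.9 GB/s theoretical peak
```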

HyperTransport is gone as an external interconnect, leaving only PCIe for off-chip IO. The Fusion Control Link is a 128-bit (each direction) interface that gives off-chip IO devices access to system memory. Trinity also features a 256-bit (each direction, per memory channel) Radeon Memory Bus (RMB) with direct access to the DRAM controllers. The excessive width of this bus likely implies that it's used for CPU/GPU communication as well.
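
To put those widths in perspective, here's the same kind of back-of-the-envelope math expressed per clock. The link clocks aren't disclosed here, so the sketch deliberately stops at bytes per cycle:

```python
# Per-cycle capacity implied by the stated bus widths (per direction).
# Actual bandwidth depends on the undisclosed link clocks, so we only
# compute bytes moved per clock here.
FCL_BITS = 128               # Fusion Control Link, each direction
RMB_BITS_PER_CHANNEL = 256   # Radeon Memory Bus, each direction, per channel
MEM_CHANNELS = 2

print(FCL_BITS // 8, "bytes/clock on the FCL")                             # 16
print(RMB_BITS_PER_CHANNEL * MEM_CHANNELS // 8, "bytes/clock on the RMB")  # 64
```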

IOMMU v2 is also supported by Trinity, giving supported discrete GPUs (e.g. Tahiti) access to the CPU's virtual memory. In Llano, data had to be read from disk into memory, copied from the CPU's address space into pinned memory accessible by the GPU, and only then pulled by the GPU into its frame buffer. With access to the CPU's virtual address space, the data now goes from disk to memory and then directly to the GPU's memory, skipping the intermediate memory-to-memory copy. Eventually we'll get to the point where there's truly one unified address space, but steps like these are what will get us there.
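
A rough way to see the saving is to simply count the bulk copies along each path. The sketch below just tallies them; it isn't tied to any real driver API, and the buffer size is arbitrary:

```python
# Count the bulk data movements for a buffer of a given size under each model.
# Purely illustrative; real drivers have more moving parts than this.
BUFFER_MB = 256

llano_path = ["disk -> system memory",
              "system memory -> pinned (GPU-visible) memory",
              "pinned memory -> GPU frame buffer"]

trinity_iommu_v2_path = ["disk -> system memory",
                         "system memory -> GPU frame buffer"]

for name, path in [("Llano-style", llano_path), ("IOMMU v2", trinity_iommu_v2_path)]:
    print(f"{name}: {len(path)} copies, {len(path) * BUFFER_MB} MB moved total")
```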

The Trinity GPU

Trinity's GPU is probably the most well understood part of the chip, seeing as how it's basically a cut down Cayman from AMD's Northern Islands family. The VLIW4 design features 6 SIMD engines, each with 16 VLIW4 arrays, for a total of up to 384 cores. The A10 SKUs get all 384 cores, while the lower end A8 and A6 parts get 256 and 192, respectively. FP64 is supported, but at 1/16 the FP32 rate.
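
The core counts fall straight out of that SIMD arrangement. A quick check of the arithmetic (the per-SKU SIMD counts of 4 and 3 are implied by the 256 and 192 figures above, not separately confirmed):

```python
# 6 SIMD engines x 16 VLIW4 arrays x 4 lanes per array = 384 stream processors.
simd_engines = 6
vliw4_arrays_per_simd = 16
lanes_per_array = 4

print(simd_engines * vliw4_arrays_per_simd * lanes_per_array)  # 384 (A10)
print(4 * vliw4_arrays_per_simd * lanes_per_array)              # 256 (A8: 4 SIMDs)
print(3 * vliw4_arrays_per_simd * lanes_per_array)              # 192 (A6: 3 SIMDs)
```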

As AMD never released any low-end Northern Islands VLIW4 parts, Trinity's GPU is a bit unique. It technically has fewer cores than Llano's GPU, but as we saw with AMD's transition from VLIW5 to VLIW4, the loss didn't really impact performance but rather drove up efficiency. Remember that most of the time that 5th unit in AMD's VLIW5 architectures went unused.

The design features 24 texture units and 8 ROPs, in line with what you'd expect from what's effectively 1/4 of a Cayman/Radeon HD 6970. Clock speeds are obviously lower than a full blown Cayman, but not by a ton. Trinity's GPU runs at a normal maximum of 497MHz and can turbo up as high as 686MHz.
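
Plugging those clocks and unit counts into the usual peak-throughput formulas gives rough theoretical figures for the top A10 configuration (theoretical peaks, not measured performance):

```python
# Theoretical peak rates for the 384-core A10 GPU at base and turbo clocks.
# Standard formulas: FLOPS = cores * 2 (MAD) * clock; texel and pixel fill
# rates scale with texture unit / ROP counts times clock.
cores, tmus, rops = 384, 24, 8

for clock_mhz in (497, 686):
    clock_ghz = clock_mhz / 1000
    gflops = cores * 2 * clock_ghz
    gtexels = tmus * clock_ghz
    gpixels = rops * clock_ghz
    print(f"{clock_mhz} MHz: {gflops:.0f} GFLOPS, "
          f"{gtexels:.1f} GTexel/s, {gpixels:.1f} GPixel/s")
```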

Trinity includes AMD's HD Media Accelerator, made up of accelerated video decode (UVD3) and encode (VCE) blocks. The Video Codec Engine is borrowed from Graphics Core Next, and it's actually functional in the hardware/software we have here today. Don't get too excited though; the VCE-enabled software we have today won't take advantage of the identical hardware in discrete GCN GPUs. AMD tells us this is purely a matter of having the resources to prioritize Trinity first, and that discrete GPU VCE support is coming.

Comments

  • AlB80 - Tuesday, May 15, 2012 - link

    No.
    It's official information.
  • JarredWalton - Tuesday, May 15, 2012 - link

    Except according to CLInfo, it does. Nice try?
  • AlB80 - Tuesday, May 15, 2012 - link

    Oops. It was.
    Now it has fp64 = 1/16 fp32.
  • princehamlet - Tuesday, May 15, 2012 - link

    I was constantly refreshing the page at 12 AM! Couldn't wait for the reviews to be posted after the embargo was lifted :D.
  • BSMonitor - Tuesday, May 15, 2012 - link

    Why? Nothing earthshattering here. AMD is scalping resources from the CPU to add TDP room and die space for more of its bulky Radeon shaders.

    It's like buying a laptop from 2004, with a DX11 upgrade.

    AMD has the "good enough" part backwards. People want their laptop to be responsive when doing work, watching movies, browsing, etc.: CPU intensive tasks. The good enough part, in regards to laptops, would be the gaming. No one expects 60fps at 1080p out of a laptop sitting on a plane flying somewhere.

    Way to capture the hearts of the 1% of the 1% of people looking for great gaming from their $500 laptop.
  • Articuno - Tuesday, May 15, 2012 - link

    Considering the CPU part is better than mobile Core 2 Duo parts (and thousands upon thousands of people are still using laptops with C2Ds) and the GPU part is several orders of magnitude better than Intel's best, I'd say buying an Intel laptop is like buying a laptop from 2004: expensive and extremely low price/performance for what you get.
  • JarredWalton - Tuesday, May 15, 2012 - link

    Whoa... several orders of magnitude? So, like, 1000X better? Because if anyone can offer up a GPU that's 1000 times faster than even GMA 4500, I'd take it! Turning down the hyperbole dial: AMD still has better drivers than Intel, but it's more like 20% better (just to grab a nice number out of thin air). Trinity's GPU is about 20% faster than HD 4000 as well, so that makes Trinity's GPU a whopping 44% better than "Intel's best".

    Now if you want to talk about the best Core 2 era IGP, then we'd be looking at more like an order of magnitude improvement. GMA 4500MHD scores around 1000 in 3DMark06, in case you were wondering (http://www.anandtech.com/show/2818/6). I know, it's only 3DMark -- still, call it 500 as a penalty for lousy drivers and HD 7660G is still "only" about 20X better.

    /meaningless debate
  • Articuno - Tuesday, May 15, 2012 - link

    Fair enough, kind of a knee-jerk reaction out of me there. Though I'm guessing the APU will be cheaper than the i7s it's going up against even without a discrete card added on top of them, so it's got very nice price/performance potential.
  • jensend - Tuesday, May 15, 2012 - link

    Yes, his "orders of magnitude" was hyperbole- but Intel's benchmark scores esp 3dmark really haven't reflected how awful their GPUs have been. The performance difference in real games was usually much bigger than that in synthetic benchmarks. You already mentioned driver issues. Even if you could get halfway decent performance out of some games, image quality was often a huge problem. If AMD or nV had offered that crappy of image quality they would have been totally excoriated in the press for cheating in order to inflate benchmarks; people didn't do that to Intel- probably because it would have felt like beating a handicapped child.

    But Sandy had some real improvements and then Ivy Bridge really turned things around for Intel. Beyond the performance improvements, after years of making excuses for their AF and saying that AA was unnecessary, they finally stopped making excuses and fixed them. Trinity is faster, but anybody who says that Ivy Bridge's graphics don't offer Trinity's any competition is badly mistaken.
  • Spunjji - Tuesday, May 15, 2012 - link

    Indeed - I was honestly pleasantly surprised to see HD4000 sitting so high in the charts. Finally I won't need to start warning people against Intel notebooks!

    ...except for the small problem of HD2500. Still, improvement is improvement.
