SSE128

AMD Architecture Comparison  | K8                       | Barcelona
SSE Execution Width          | 64-bit                   | 128-bit
Instruction Fetch Bandwidth  | 16 bytes/cycle           | 32 bytes/cycle
Data Cache Bandwidth         | 2 x 64-bit loads/cycle   | 2 x 128-bit loads/cycle
L2/Northbridge Bandwidth     | 64 bits/cycle            | 128 bits/cycle
FP Scheduler Depth           | 36 dedicated 64-bit ops  | 36 dedicated 128-bit ops

Many of the "major" changes in Barcelona were driven by one significant change: what AMD is calling SSE128. In the K8 architecture, AMD can execute two SSE operations in parallel; however, the SSE execution units are only 64 bits wide, so the K8 has to handle 128-bit SSE operations as two 64-bit operations. This also means that when a 128-bit SSE instruction is fetched, it is first decoded into two micro-ops (one for each 64-bit half of the instruction), taking up an extra decode port for a single instruction.

Barcelona widens the execution units that handle SSE operations from 64 bits to 128 bits, so 128-bit SSE operations no longer have to be broken up into two 64-bit operations. This also means more usable decode bandwidth, since 128-bit SSE instructions now map to a single micro-op instead of two. The FP scheduler can now handle these full 128-bit SSE operations as well.
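
To make the decode difference concrete, here is a minimal C sketch (ours, not from AMD's documentation) using SSE intrinsics; the ADDPS mapping noted in the comment is the usual compiler output for this intrinsic.

```c
/* Illustrative only: one 128-bit packed-single add. Compilers map
 * _mm_add_ps to a single ADDPS instruction. On the K8, that instruction
 * is cracked into two 64-bit micro-ops, consuming an extra decode slot;
 * on Barcelona it issues as a single micro-op to a 128-bit unit. */
#include <xmmintrin.h>

static inline __m128 vec4_add(__m128 a, __m128 b)
{
    return _mm_add_ps(a, b);   /* ADDPS xmm, xmm */
}
```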

It's the increase in SSE execution width that drove a number of other changes within the core. Since you effectively have more decode bandwidth when executing 128-bit SSE instructions, AMD discovered a new bottleneck: instruction fetch bandwidth. These 128-bit SSE instructions tend to be quite large, and in order to maximize the number decoded in parallel, the Barcelona core can now fetch 32 bytes per cycle, up from 16 bytes in the K8. The 32-byte instruction fetch benefits not only SSE code but integer code as well; larger instructions in general will see a boost here.
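
As a rough back-of-envelope illustration of why the wider fetch window matters, the little C program below compares how many instructions fit in a 16-byte versus a 32-byte fetch per cycle. The average instruction lengths are assumptions chosen for illustration, not figures from the article.

```c
/* Back-of-envelope: instructions per fetch window. The average lengths
 * below are assumed for illustration, not measured values. */
#include <stdio.h>

int main(void)
{
    const double avg_sse_len = 5.0; /* assumed: SSE op with prefixes + ModRM/SIB */
    const double avg_int_len = 3.5; /* assumed: typical x86 integer op           */

    printf("16-byte fetch: ~%.1f SSE ops or ~%.1f integer ops per cycle\n",
           16.0 / avg_sse_len, 16.0 / avg_int_len);
    printf("32-byte fetch: ~%.1f SSE ops or ~%.1f integer ops per cycle\n",
           32.0 / avg_sse_len, 32.0 / avg_int_len);
    return 0;
}
```

Under those assumptions, a 16-byte window barely covers three SSE instructions per cycle, while 32 bytes leaves comfortable headroom for a three-wide decoder.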

Now that you can fetch and decode more instructions, you need to be able to get more data to the execution core, so AMD widened the interface between the L1 data cache and Barcelona's SSE registers. Barcelona can now perform two 128-bit SSE loads per cycle from the L1-D cache, compared to two 64-bit loads per cycle in the K8. AMD also widened the interface between the L2 cache and the northbridge/memory controller so that 128 bits can now be transferred per cycle, once again to balance out all of the aforementioned changes.
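
A hypothetical inner loop like the sketch below is the kind of code that benefits: each iteration issues two 128-bit aligned loads, which the K8's 2 x 64-bit load ports would presumably spread over two cycles of L1-D bandwidth, while Barcelona's 2 x 128-bit ports can service both in one. The function name, sizes, and alignment requirements are our own for the sake of the example.

```c
/* Illustrative stream of paired 128-bit loads. Assumes n is a multiple
 * of 4 and that all three pointers are 16-byte aligned. */
#include <xmmintrin.h>

void add_arrays(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);              /* 128-bit load #1 */
        __m128 vb = _mm_load_ps(b + i);              /* 128-bit load #2 */
        _mm_store_ps(out + i, _mm_add_ps(va, vb));   /* add + store     */
    }
}
```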

The culmination of the SSE128 improvements is very similar to some of the changes made in the Yonah to Merom transition. Prior to Conroe/Merom, Yonah could not keep up with AMD's K8 when it came to FP/SSE performance. Almost a year and a half ago we did an article where we compared AMD's K8 to Intel's Yonah running at the same clock speed. While Yonah was able to equal the K8's performance in general applications, professional 3D rendering and games, it could not compete when it came to video encoding.

There were a number of SSE performance improvements made in Yonah, but it wasn't until Intel's Core 2 processors that Intel was really able to outperform AMD in our video encoding tests. Whether the improvements were due to the single-cycle 128-bit SSE throughput introduced in Core 2, the wider front end, or a combination of both remains to be seen. Although it's difficult to compare specs between two very different architectures, encoding performance is a sore spot for AMD today, and it's something the SSE128 changes can only help.

Comments

  • JarredWalton - Thursday, March 1, 2007 - link

    Games have quite a lot of LOAD instructions, like most programs, as well as plenty of branches (esp. in the AI routines). Most likely the boost that Core 2 gets is due in large part to better instruction reordering and branch prediction, although the cache and prefetchers probably help as well. Given that AMD was better than NetBurst due to memory latency, throw in better OOE (Out of Order Execution) logic, keep the improved latency, and they should do pretty well.

    Naturally, everything at this point is purely speculation, but in the next few months we should start to get a better idea of what's in store and how it will perform. One problem that still remains is that even if AMD can be competitive clock-for-clock, Intel looks primed to be able to go up to at least 3.6 GHz dual core and 3.46 GHz quad core if necessary. AMD has traditionally not reached clock speeds nearly as high as Intel, possibly due in part to having more metal layers (speculation again - process tech and other features naturally play a role), so if they release 2.9GHz Barcelona at $1000 you can pretty much guarantee Intel will launch 3.2 and/or 3.46 GHz Kentsfield (and/or FSB1333 3.33 GHz).

    On the bright side, at least things should stay interesting in the CPU world. :D
  • yyrkoon - Thursday, March 1, 2007 - link

    Yes, interesting indeed, but from experience, AMD has always been too vocal about what they plan on doing, especially during the times they are in a 'rut'.

    What this usually means to me is that AMD is trying to blow smoke up our backsides; we'll see, though.

    Keep in mind, my main desktop system, and my backup server for that matter, both are AMD systems. The phrase "cost effective" applies here.
  • kilkennycat - Thursday, March 1, 2007 - link

    Yesterday, Intel announced that they were converting a fourth fab to 45nm. That's a great deal of confidence in that process. And a few days ago they announced that desktop shipments of Penryn-based CPUs had been pulled forward into 2007. Looks as if AMD's 'window of opportunity' is likely to be very small. IBM has not yet announced a successful SRAM implementation on their 45nm process; Intel had their SRAM design on 45nm up and running in late 2005.
  • archcommus - Thursday, March 1, 2007 - link

    True but the move to 45 nm might not make a huge difference in real world performance, just like the move to 65 nm didn't for AMD. Their next full blown architecture will still be a ways off.
  • Roy2001 - Thursday, March 1, 2007 - link

    Unlike AMD's move to the 65nm process, the move to 45nm has shown that Penryn will eat less power and run faster thanks to its high-k material and metal gates.
  • smitty3268 - Thursday, March 1, 2007 - link

    Every process shows that in theory before chips are actually being made on it. We'll see what actually happens when Penryn is released, not before.
  • chucky2 - Thursday, March 1, 2007 - link

    Has AMD given any indication of how well dropping an Agena or Kuma CPU into an existing AM2 motherboard will go?

    Especially AMD's own newly released 690G or the upcoming nVidia MCP68?

    Chuck
  • mamisano - Thursday, March 1, 2007 - link

    It has been stated in the past that AM2+-based products will run in AM2-based boards. The limitation, if I understand it correctly, will be the lack of support for the new power features.

    Someone correct me if I am wrong :)
  • chucky2 - Thursday, March 1, 2007 - link

    Then it should be no problem for AMD to confirm through AnandTech that this is the case.

    Surely if Barcelona is this close to shipping (only a few months away), AMD must know if Agena and/or Kuma will work in current AM2 motherboards, especially their own 690 series they're just about to release.

    All I'm asking for is a definite answer either way; it shouldn't be that hard for AMD to give at this point.

    Chuck
  • mino - Friday, March 2, 2007 - link

    AMD stated PUBLICLY to anyone who listened that AM2+ stuff will plug into AM2, with just a BIOS update needed.

    Why should they react to every consumer who asks the same question on some forum every second week?

    Most important, they told the big Tier 1 OEMs that it WILL (not "may") work with AM2-spec boards.
    They therefore cannot make it incompatible; they would be out of business in no time.
