Merging CPUs and GPUs

AMD has already outlined the beginning of its CPU/GPU merger strategy in a little product called Fusion. While quite bullish on Fusion, AMD hasn't done a tremendous job of truly explaining its importance. Fusion, if you haven't heard, is AMD's first single chip CPU/GPU solution due out sometime in the 2008 - 2009 timeframe. Widely expected to be two individual dies on a single package, the first incarnation of Fusion will simply be a more power efficient version of a platform with integrated graphics. Integrated graphics is nothing to get excited about, but it is what follows as manufacturing technology and processor architectures evolve that is really interesting.


AMD views the Fusion progression as three discrete steps:


Today we have a CPU and a GPU separated by an external bus, with the two being quite independent. The CPU does what it does best, and the GPU helps out wherever it can. Step 1, which AMD calls integration, is what we can expect in the first Fusion product due out in 2008 - 2009. The CPU and GPU are simply placed next to one another and there's minor leverage of that relationship, mostly from a cost and power efficiency standpoint.

Step 2, which AMD calls optimization, gets a bit more interesting. Parts of the CPU can be shared by the GPU and vice versa. There's not a deep level of integration, but it begins the transition to the most important step - exploitation.

The final step in the evolution of Fusion is where the CPU and GPU are truly integrated, and the GPU is accessed by user mode instructions just like the CPU. You can expect to talk to the GPU via extensions to the x86 ISA, and the GPU will have its own register file (much like FP and integer units each have their own register files). Elements of the architecture will be shared, especially things like the cache hierarchy, which will prove useful when running applications that require both CPU and GPU power.
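
To make the register file analogy concrete, here is a toy Python model of the architectural state this final step implies: separate register files per execution domain (integer, FP, and a wide GPU file), each selected implicitly by the instruction class, all reachable from ordinary user mode code. Every instruction name here is hypothetical; no such x86 extensions have been announced.

```python
# Toy model: one architectural state with per-domain register files,
# mirroring how x86 integer and FP instructions already address
# separate files today. The "gadd" GPU op is purely illustrative.

class FusionState:
    def __init__(self):
        self.gpr = [0] * 16              # integer register file
        self.fpr = [0.0] * 16            # FP register file
        self.vpr = [[0.0] * 64
                    for _ in range(16)]  # hypothetical wide GPU register file

    def execute(self, op, dst, src):
        # Each instruction class implicitly selects its own register file.
        if op == "add":                  # integer ALU op
            self.gpr[dst] += self.gpr[src]
        elif op == "fadd":               # FP op
            self.fpr[dst] += self.fpr[src]
        elif op == "gadd":               # hypothetical GPU vector op
            self.vpr[dst] = [a + b for a, b in
                             zip(self.vpr[dst], self.vpr[src])]
        else:
            raise ValueError(f"unknown op {op!r}")

s = FusionState()
s.gpr[1] = 5
s.execute("add", 0, 1)    # touches only the integer file
s.vpr[1] = [1.0] * 64
s.execute("gadd", 0, 1)   # touches only the GPU file
```

The point of the sketch is simply that GPU state becomes part of the same architectural state the OS already saves and restores, which is what makes user mode access (rather than a driver round-trip) possible.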

The GPU could easily be integrated onto a single die as a separate core behind a shared L3 cache. For example, if you look at the current Barcelona architecture you have four homogeneous cores behind a shared L3 cache and memory controller; simply swap one of those cores with a GPU core and you've got an idea of what one of these chips could look like. Instructions that can only be processed by the specialized core will be dispatched directly to it, while instructions better suited for other cores will be sent to them. There would have to be a bit of front end logic to manage all of this, but it's easily done.
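
That front end logic can be sketched in a few lines: a shared front end examines each decoded instruction and steers it to the core type that can execute it. The instruction mnemonics and the simple two-way split below are invented for illustration; a real design would involve dependence tracking and much more.

```python
# Minimal sketch of heterogeneous front-end dispatch: partition an
# instruction stream between a CPU core and a GPU core based on
# instruction class. All mnemonics are hypothetical.

GPU_OPS = {"gadd", "gmul", "texld"}   # ops only the GPU core can execute

def dispatch(instructions):
    """Steer each instruction to the queue of the core that handles it."""
    queues = {"cpu": [], "gpu": []}
    for inst in instructions:
        op = inst.split()[0]
        target = "gpu" if op in GPU_OPS else "cpu"
        queues[target].append(inst)
    return queues

stream = ["add r0, r1", "gmul v0, v1", "load r2, [r3]", "texld v2, t0"]
q = dispatch(stream)
# scalar ops land in the CPU queue, graphics ops in the GPU queue
```

The shared L3 in the Barcelona-style layout is what would make this cheap: both queues operate on the same cached data, so handing work across core types doesn't mean copying it over an external bus.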




AMD went as far as to say that the next stage in the development of x86 is the heterogeneous processing era. AMD's Phil Hester stated plainly that by the end of the decade, homogeneous multi-core will become increasingly inadequate. The groundwork for the heterogeneous processing era (multiple cores on chip, each with a different purpose) will be laid in the next 2 - 4 years, with true heterogeneous computing coming after 2010.


It's not just about combining the CPU and GPU as we know them; it's also about adding other types of processors and specialized hardware. You may remember that Intel made some similar statements a few IDFs ago, but not nearly as boldly as AMD, given that Intel doesn't have nearly as strong a graphics core to begin integrating. The xPUs listed in the diagram above could easily be things like H.264 encode/decode engines, network accelerators, virus scan accelerators, or any other type of accelerator that's deemed necessary for the target market.


In a sense, AMD's approach is much like that of the Cell processor, the difference being that with AMD's direction the end result would be a much more powerful sequential core combined with a true graphics core. Cell was very much ahead of its time, and by the time AMD and Intel can bring similar solutions to the market the entire industry will be far more ready for them than it was for Cell. Not to mention that treating everything as extensions to the x86 ISA makes programming far easier than with Cell.


Where does AMD's Torrenza come into play? If you'll remember, Torrenza is AMD's platform approach to dealing with different types of processors in an AMD system. The idea is that external accelerators could simply pop into an AMD processor socket and communicate with the rest of the system over HyperTransport. Torrenza actually works quite well with AMD's Fusion strategy, because it allows for other accelerators (xPUs if you will) to be put in AMD systems without having to integrate the functionality on AMD's processor die. If there's enough demand in the market, AMD can eventually integrate the functionality on die, but until then Torrenza offers a low cost in-between solution.

AMD drew the parallel to the 287/387 floating point coprocessor socket that was present on 286/386 motherboards. Only around 2 - 3% of 286 owners bought a 287 FPU, while around 10 - 20% of 386 owners bought a 387 FPU; when the 486 was designed it simply made sense to integrate the functionality of the FPU into all models because the demand from users and developers was there. Torrenza would allow the same sort of migration to occur from external socket to eventual die integration if it makes sense, for any sort of processor.


  • Regs - Friday, May 11, 2007 - link

    Being tight-lipped does make AMD look bad right now, but it could be even worse for them once Intel has its way with the information alone. I'm not talking about technology or performance, I'm talking about marketing and pure business politics.

    Intel beat AMD to market by a huge margin and I think it would be insane for AMD to go ahead and post numbers and specifications while Intel has more than enough time to make whatever AMD is offering look bad before it hits the shelves or comes into contact with a Dell machine.

  • strikeback03 - Friday, May 11, 2007 - link

    quote:

    Apparently Intel suspects something is going on as well. One look at the current prices of the E6600 C2D should confirm this, as its currently half the price of what it was a month ago. Unless, there is something else I am missing, but the Extreme CPUs still seem to be hovering around ~$1000 usd.


    Intel cut the price of all the C2D processors by one slot in the tree - the Q6600 to the former price of the E6700, the E6700 to the former price of the E6600, the E6600 to the former price of the E6400, etc. Anandtech covered this a month or so ago after AMD cut prices.

    quote:

    After a while this could be a problem for the consumer base, and may resemble something along the lines of how a lot of Linux users view Microsoft, with their 'Monopoly'. In the end, 'we' lose flexibility, and possibly the freedom to choose what software will actually run on our hardware. This is not to say I buy into this belief 100%, but it is a distinct possibility.


    I wonder as well. Will it be relatively easy to mix and match features as needed? Or will the offerings be laid out that most people end up paying for a feature they don't want for each feature they do?
  • yyrkoon - Friday, May 11, 2007 - link

    quote:

    I wonder as well. Will it be relatively easy to mix and match features as needed? Or will the offerings be laid out that most people end up paying for a feature they don't want for each feature they do?


    Yeah, it's hard to take this piece of 'information' without a grain of salt added. On one hand you have the good side, true integrated graphics (not this shitty thing of the past, hopefully . . .), with full bus speed communication, and whatnot, but on the other hand, you cut out discrete manufacturers like nVidia, which in the long run, we are not only talking about just discrete graphics cards, but also one of the best/competing chipset makers out there.
  • Regs - Friday, May 11, 2007 - link

    The new attitude Anand displays with AMD is more than enough and likely the whole point of the article.

    AMD is changing for a more aggressive stance. Something they should have done years ago.

  • Stablecannon - Friday, May 11, 2007 - link

    quote:

    AMD is changing for a more aggressive stance. Something they should have done years ago.


    Aggressive? I'm sorry, could you refer me to the article that gave you that idea? I must have missed it while I was at work.
  • Regs - Friday, May 11, 2007 - link

    Did you skim?

    There were at least two whole paragraphs. Though I hate to quote so much content, I guess it's needed.

    quote:

    Going into these meetings, in a secluded location away from AMD's campus, we honestly had low expectations. We were quite down on AMD and its ability to compete, and while AMD's situation in the market hasn't changed, by finally talking to the key folks within the company we at least have a better idea of how it plans to compete.



    quote:

    There's also this idea that coming off of a significant technology lead, many within AMD were simply complacent and that contributed to a less hungry company as a whole. We're getting the impression that some major changes are happening within AMD, especially given its abysmal Q1 earnings results (losing $611M in a quarter tends to do that to a company). While AMD appeared to be in a state of shock after Intel's Core 2 launch last year, the boat has finally started to turn and the company that we'll see over the next 6 - 12 months should be quite different.

  • sprockkets - Friday, May 11, 2007 - link

    What is there that is getting anyone excited to upgrade to a new system? We need faster processors and GPUs? Sure, so we can play better games. That's it?

    Now we can do HD content. I would be much more excited about that except it is encumbered to the bone by DRM.

    I just wish we had a competent processor that only needs a heatsink to be cooled.

    quote:

    AMD showed off the same 45nm SRAM test vehicle we saw over a year ago in Dresden, which is a bit bothersome.


    Not sure what you are saying since over a year ago they would have been demoing perhaps 65nm cells, but whatever.

    And as far as Intel reacting, they are already in overdrive with their product releases, FSB bumps, updating the CPU architecture every 2 years instead of 3, new chipsets every 6 months, etc. I guess when you told people we would have 10GHz Pentium 4s and lost your credibility, you need to make up for it somehow.

    Then again, if AMD shows off benchmarks, what good would it do? The desktop variants we can buy are many months away.
  • Viditor - Saturday, May 12, 2007 - link

    quote:

    Not sure what you are saying since over a year ago they would have been demoing perhaps 65nm cells, but whatever

    In April of 2006, AMD demonstrated 45nm SRAM. This was 3 months after Intel did the same...
  • sprockkets - Friday, May 11, 2007 - link

    To reply to myself, perhaps the Fusion project is the best thing coming. If we can have a standard set of instructions for CPU and GPU, we will no longer need video drivers, and perhaps we can have a set that runs at very low power. THAT is what I want.

    Wish they had talked more about DTX.
  • TA152H - Friday, May 11, 2007 - link

    I agree with you about only needing a heat sink, I still use Pentium IIIs in most of my machines for exactly that reason. I also prefer slotted processors to the lame socketed ones, but they cost more and are unnecessary so I guess they aren't going to come back. They are so much easier to work with though.

    I wish AMD or Intel would come out with something running around 1.4 GHz that used 10 watts or less. I bought a VIA running at 800 MHz a few years ago, but it is incredibly slow. You're better off with a K6-III+ system; you get better performance and about the same power use. Still, it looks like Intel and AMD are blind to this market, or at least myopic, so it looks like VIA/Centaur is the best hope there. The part I don't get is why they superpipeline something for high clock speed when they are going for low power. It seems to me an upgraded K6-III would be better at something like this, since by comparison the Pentium/Athlon/Core lines offer poor performance for the power compared to the K6 line, considering it's made on old lithography. So does the VIA, and that's what it's designed for. I don't get it. Maybe AMD should bring it back as their ultra-low power design. Actually, maybe they are. On a platform with reasonable memory bandwidth, it could be a real winner.
