Final Words

As Cell-like as Intel's Teraflop Research Chip may be, it's never going to be a product. In fact, there was no rhyme or reason to choosing 80 cores as the magic number; it was simply the number of cores Intel could cram into the design given the target die size. Nor is the teraflop chip here to set any performance records: its sole purpose is as a test vehicle for Intel's tiled network approach, for power management across many cores, for 3D stacked memory, and more broadly for the future of multi-core processors.

Intel stated that the next two steps for this research chip are to introduce a 3D stacked die, on which we suspect Intel has already made significant progress, and to introduce more capable cores.

Intel's goal with this chip isn't to build a floating point monster, but rather a fast general purpose chip. Chances are that what this chip evolves into won't be an array of 80 general purpose processors, but perhaps an 8 or 16 core chip complete with x86 cores, specialized hardware, and 3D stacked memory.

What's the time to fruition for these technologies? Intel estimates 5 - 10 years before we see the tangible benefits of this research in products. By the end of the decade we can expect quad-core to be mainstream on the desktop, so by 2015 many of the many-core problems we recapped here today will be a reality. The Tera-scale research we've seen today will hopefully mitigate those problems, leading to a much smoother transition to the era of tera.
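
Intel hasn't detailed the router logic here, but dimension-ordered (XY) routing is the textbook scheme for a 2D mesh of tiles like this one, and a short sketch shows how simple each tile's routing decision can be. The 8x10 grid below is one arrangement consistent with 80 tiles; everything else (names, dimensions) is illustrative, not Intel's implementation.

#include <iostream>
#include <vector>

// Illustrative sketch: dimension-ordered (XY) routing on a 2D mesh of tiles.
// A message moves hop by hop, fully resolving the X coordinate before Y,
// which is deadlock-free on a mesh and needs no global routing tables.
struct Tile { int x, y; };

std::vector<Tile> routeXY(Tile src, Tile dst) {
    std::vector<Tile> path{src};
    Tile cur = src;
    while (cur.x != dst.x) {                  // step toward dst in X first
        cur.x += (dst.x > cur.x) ? 1 : -1;
        path.push_back(cur);
    }
    while (cur.y != dst.y) {                  // then step toward dst in Y
        cur.y += (dst.y > cur.y) ? 1 : -1;
        path.push_back(cur);
    }
    return path;                              // hop count = |dx| + |dy|
}

int main() {
    // Corner to corner on an 8x10 grid (80 tiles): 7 + 9 = 16 hops.
    std::vector<Tile> p = routeXY({0, 0}, {7, 9});
    std::cout << "hops: " << p.size() - 1 << "\n";
}

The appeal of a mesh is exactly this locality: a tile only ever talks to its four nearest neighbors, so wire lengths stay constant no matter how many cores are added.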

Comments

  • creathir - Monday, February 12, 2007 - link

    The catch with all of this wonderful multi-core bliss is the software side of things. Multi-core means the software needs to be written asymmetrically, and that will be an incredibly hard challenge, especially for real-time applications such as video games (a minimal threading sketch follows the comments). The concept is fantastic, but the proof is in the pudding, as they say. I do find Intel's routing technology quite interesting, especially the idea of stacking the L1/L2 memory on top of (or rather, below ;)) the cores. How exactly would the interconnect between them work, I wonder? It should be interesting to see what all three of these companies come out with in the coming years. I suppose the naysayers of Moore's law will once again be disproven...

    - Creathir
  • Goty - Sunday, February 11, 2007 - link

    So basically it's a Cell processor on steroids. Other than the chip stacking, what's so new about it? People have been talking about 3D packaging for a year or two now, and not just Intel.
  • SocrPlyr - Monday, February 12, 2007 - link

    In a way, yes. And in a lot of ways, no. Yes, the individual tiles are floating point units, but this chip is not meant to be a functional replacement for anything, the way Cell is trying to be. You really cannot compare this chip to anything available on the market; it is only a proof of concept. The choice of floating point units for the tiles was probably due to the fact that ultra high performance workloads are generally almost completely FP dependent, so when testing and playing with this thing, those types of applications are easy to come by. To be honest, this chip seems a lot like a DSP chip, and speaking of those, you will realize that the Cell processor is little more than an altered one itself. Really, on all sides there has been little technology that is completely new, just better implementations.
  • oldhoss - Sunday, February 11, 2007 - link

    I'll bet that SOB would give two 8800GTXs a run for their money... CPU-limited be damned! ;-D
  • mino - Sunday, February 11, 2007 - link

    "Since the per-die area doesn't increase, the number of defects don't go up per die."

    Any sensible person knows that defect-rate is(mostly) dependent on the number of functional units(i.e. transistors), provided the defect-rate off a single unit is set.

    The fact that it is NOW mostly tied to die-area is caused exactly by the fact we do NOT use stacked-die aproach yet.

    Otherwise a nice news piece. Thanks AT.
  • mino - Sunday, February 11, 2007 - link

    sorry for typos...
  • notposting - Sunday, February 11, 2007 - link

    quote:

    The obvious solution to this problem is to use wider front side and memory buses that run at higher frequencies, but that solution is only temporary. Intel's slide above shows that a 6-channel memory controller would require approximately 1800 pins, and at that point you get into serious routing and packaging constraints. Simply widening the memory bus and relying on faster memory to keep up with the scaling of cores on CPUs isn't sufficient for the future of microprocessors.


    The picture above this shows the Terascale slide:
    http://images.anandtech.com/reviews/cpu/intel/tera...
  • sprockkets - Sunday, February 11, 2007 - link

    We have a solution to the problem of ever increasing CPU speed. My question is, who here needs it?

    For those who need to open 80 Firefox tabs while video encoding, virus scanning, and watching an HD movie, all at the same time?

    Data sets did need to get bigger, but check this out: music files started out at small sampling rates, and by around Win98 they reached the CD standard. It stopped there since no one needs it bigger than that, that is, 44.1kHz and 16-bit resolution. If you can hear 96/192kHz 24-bit music better, fine, but we have others saying that 128kbps MP3 was CD quality (see the bitrate arithmetic after the comments).

    Video resolutions made their way from 640x480 to now around 1600x1200, plus widescreen variants of that. Color depth sits at around 32-bit. Can you see it improving beyond that?

    OK, so we can what, go 3D now, holographic?

    Sorry to you, Intel and AMD, but the vast majority of the people you sell your technology to can live off a $30 processor, $50 of RAM, the smallest HDD, and a $30 optical drive which does everything.

    It would be cool to see a motherboard with built-in DDR3 or DDR4 memory for an AMD Fusion CPU/GPU core, with 2GB of it and 32GB of flash built on as well. Let's go for silent computing, you know, back in the day when all processors only had tiny heatsinks on them!!!
  • joex444 - Monday, February 12, 2007 - link

    What part of the article was confusing to you?

    NOT FOR RETAIL SALE, COMMERCIAL USES ONLY.

    I got the idea, guess you didn't. PWNT!
  • Larso - Monday, February 12, 2007 - link

    So why did we ever bother to invent plastic materials? Or why invent the laser? The laser is a good example of an invention that was expected to be a useless curiosity but turned out to be hugely useful.

    But this case isn't even comparable to that. There are indeed problems waiting to be solved by this solution. All servers with more than a handful of CPUs could be cut down in size and power usage tremendously, and what about supercomputers? They are going to be extremely powerful when they change to this kind of CPU.

    And by the way, you have to be quite narrow-minded not to see the (sales) potential of supercomputing at home. Let's have computer games with scary intelligent AIs :)
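
To make creathir's point about parallel software concrete, here is a minimal C++ sketch that splits a workload across however many cores the machine reports. The sum below is embarrassingly parallel, which is exactly the problem: most real-time game code is not, and carving it into independent slices like this is the hard part. All names and sizes are illustrative.

#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // Ask the runtime how many hardware threads exist; fall back to 1.
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    std::vector<double> data(1000000, 1.0);   // toy workload
    std::vector<double> partial(cores, 0.0);  // one result slot per thread
    std::vector<std::thread> pool;

    const std::size_t chunk = data.size() / cores;
    for (unsigned t = 0; t < cores; ++t) {
        const std::size_t lo = t * chunk;
        const std::size_t hi = (t == cores - 1) ? data.size() : lo + chunk;
        // Each thread sums its own slice into its own slot: no sharing, no locks.
        pool.emplace_back([&data, &partial, t, lo, hi] {
            partial[t] = std::accumulate(data.begin() + lo,
                                         data.begin() + hi, 0.0);
        });
    }
    for (auto& th : pool) th.join();

    std::cout << "sum = "
              << std::accumulate(partial.begin(), partial.end(), 0.0)
              << "\n";                        // prints 1e+06
}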
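
On mino's defect-rate point, a standard first-order yield model (Poisson-distributed defects) makes the argument precise. With die area A and defect density D per unit area, yield depends on the total silicon that must be defect-free, so splitting a fixed transistor budget across two stacked dies does not change it:

    Y_mono  = e^{-A D}                          (one die of area A)
    Y_stack = (e^{-(A/2) D})^2 = e^{-A D}       (two good dies of area A/2 each)

To first order the two are equal, just as mino says: defects track total transistor count, not per-die footprint. (This ignores bonding yield and the option of testing each die before stacking, both of which change the picture in practice.)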
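
And the bitrate arithmetic behind sprockkets' audio numbers, for reference (the uncompressed CD rate is a standard calculation):

    44,100 samples/s x 16 bits/sample x 2 channels = 1,411,200 bits/s ≈ 1,411 kbps

A 128 kbps MP3 is therefore roughly an 11:1 reduction, which is why whether it counts as "CD quality" is a matter of ears rather than arithmetic.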
