Things That Could Go Wrong

I had to write this section because, as strongly as Intel has been executing these past couple of years, we must keep in mind that in the GPU market Intel isn't only the underdog, it's going up against the undefeated. NVIDIA is the company that walked into 3dfx's house and walked away with its IP, the company that could be out-engineered and outperformed by ATI for an entire year and still emerge dominant. This is Intel's competition: the most Intel-like of all the manufacturers in the business, and a highly efficient one at that.

Intel may benefit from the use of its advanced manufacturing fabs in making Larrabee, but it is also burdened by them. NVIDIA has been building GPUs, some quite large, without ever investing a dime in building its own manufacturing facilities. There's much that could go wrong with Larrabee; the short list follows:

Manufacturing, Design and Yield

Before we get to any of the GPU-specific concerns about Larrabee, there are the basics that apply to making any chip. There's always the chance that it could be flawed, that it might not reach the right clock speeds, deliver the right performance, or yield well enough. Larrabee has a good chance of being Intel's largest die produced in desktop-like volumes, and while Intel is good at manufacturing, we can't rule these out as concerns.

Performance

As interesting as Larrabee sounds, it's not going to arrive for another year at least. NVIDIA should have even higher performing parts out by then, making GT200 look feeble by comparison. If Intel can't deliver a real advantage over the best from NVIDIA and AMD, Larrabee won't get very far as little more than a neat architecture.

Drivers and Developer Relations

Intel's driver team is hardly its strong point today. On the integrated graphics side we continue to have tons of issues; even as we test the new G45 platform we're still bumping into many driver-related problems, and we're hearing, even from within Intel, that the IGP driver team leaves much to be desired. Remember that NVIDIA as a company is made up mostly of software engineers: drivers are paramount to making a GPU successful, and Intel hasn't proved itself here.

I asked Intel who was working on the Larrabee drivers; thankfully, the current driver team is hard at work on the current IGP platforms and not on Larrabee. Intel has a number of its own software engineers working on Larrabee's drivers, as well as a large team that came over from 3DLabs. It's too early to say whether or not this is a good thing, and we have no idea what Intel's capabilities are from a regression testing standpoint, but great architecture or not, drivers can easily decide the winner in the GPU race.

Developer relations are also very important. Remember the NVIDIA/Assassin's Creed/DirectX 10.1 fiasco? NVIDIA's co-marketing campaign with nearly all of the top developers is an incredibly strong force. While Intel has the clout to be able to talk to game developers, we're bound to see the clash of two impossibly strong forces here.

Comments

  • skochnet - Monday, August 4, 2008 - link

    I have been a daily reader of AnandTech and a computer tech enthusiast for the past 6 years. I found this article so interesting and well written that I felt compelled to sign up for an account today to post my appreciation for it. The depth of this article was fascinating. This could be a leap-ahead technology which would change and potentially restructure the industry as it stands now if successful. …or not make the grade, and the industry players all continue the tug-of-war. I especially enjoy your speculation. Your group is no doubt privy to a unique vantage point that makes these thoughts even more valuable and interesting. Thank you.
  • DerekWilson - Monday, August 4, 2008 - link

    We really appreciate the kind words.

    I'm glad you enjoyed the article, and this is definitely an exciting development that I think -- whether it succeeds or fails -- we will all have our eyes on.
  • iocedmyself - Monday, August 4, 2008 - link

    Actually, Crysis is the point, or a good example anyway, since Intel has been touting that Larrabee will be something like up to 2.5 to 5 times faster than traditional present-day GPU solutions. I didn't think they had been working on this for 4 years, and while it may have seemed like a good idea back then, at a time when they hadn't launched a worthwhile product in 4 or 5 years, they just aren't that innovative.

    HP and Intel teamed up in '94 to develop the original IA-64, spent billions in development, launched it 3 years late, and made platform sales in the triple digits...total.

    During that time they also started developing Timna in '97, which was supposed to target the low-power, sub-$600 desktop system bracket and be their "first" CPU with an IMC. To achieve this they designed the IMC for use with Rambus memory...they pushed the launch back several times to nearly 2 years after projected, during which they redesigned the IMC for use with SDRAM, though it ended up having some pesky fatal design flaw and they scrapped it just before the launch of...

    Pentium 4! Mmm...NetBurst, tastes like cowpie. Intel originally quoted NetBurst as being able to scale to 10GHz, but they were close there, weren't they? 3.8GHz is close...isn't it? It's almost half-way...besides, it doubled as a hotplate!

    It has taken them 5 years to reach the point of being able to essentially copy AMD's IMC design, and to launch it on a chip for which they license the x86-64 code from...AMD. Intel may do great work when it comes to designing new motherboard and networking standards (SATA, PCIe, and Ethernet), but a high-performance 32-bit chip launched months before the consumer 64-bit OS isn't an achievement in my view, so much as a reminder of the success they enjoyed in the P4 days.


    From a performance standpoint, it just doesn't seem realistic. The HD 4870 can surpass 1 teraflop of computation on a single 55nm die, whereas Intel's $1000+ highest-performing quad-core chips clock in at 38-51 gigaflops, or 9.5-12.75 gigaflops per core.

    Their 80-core Terascale, which used a 65nm fab if I'm not mistaken, hit 1 teraflop clocked at 3.2GHz (12.5 Gflops/core) and 1.8 teraflops when clocked up to 5.1GHz (22.5 Gflops/core).
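
    For reference, the peak-rate arithmetic behind those per-core figures works out roughly as in the sketch below. This is a minimal illustration in Python; the FLOPs-per-cycle values are assumptions chosen to reproduce the quoted numbers, not vendor specifications.

        # Peak throughput ~= cores * clock (GHz) * FLOPs issued per core per cycle.
        def peak_gflops(cores, clock_ghz, flops_per_cycle):
            return cores * clock_ghz * flops_per_cycle

        # Quad-core desktop CPU, assuming 4 double-precision FLOPs/cycle/core via SSE:
        print(peak_gflops(4, 3.2, 4))    # ~51.2 GFLOPS total, ~12.8 per core

        # 80-core Terascale prototype, assuming ~4 FLOPs/cycle/core:
        print(peak_gflops(80, 3.2, 4))   # ~1024 GFLOPS, i.e. about 1 teraflop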


  • ltcommanderdata - Monday, August 4, 2008 - link

    Perhaps it's not so, but it seems to me that Larrabee is quite similar to the SPEs in Cell: simplified cores based on a common ISA (x86/PPC) optimized for floating point/vector ops. It might be interesting to compare Cell's and Larrabee's architectures, and eventually performance, when the products are released. I believe Toshiba is already incorporating Cell as the SpursEngine accelerator in notebooks, so I can see it running into Larrabee. Might be a return of the x86/PPC debates.
  • Lux88 - Monday, August 4, 2008 - link

    I'm glad Intel strongly believes they can provide a solution. Novel approaches are always welcome :)!

    But there are a couple of things that keep me from becoming overly optimistic:
    1. Itanium also relied (relies) on very smart compilers to produce the optimal machine code. Didn't quite happen.

    2. Dynamic compilers, i.e. translating DirectX to Larrabee on the fly, can't be very smart because this translation has to run in "real time".

    3. Intel seems to be heavily touting raytracing. Again, I'm glad they are doing this. But it seems to confirm that they know they can't have a stellar win by just rendering the "same old DirectX" through additional layers.

    In addition to hardware, they have to juggle drivers, compilers and libraries. They also have to come up with a software renderer, and support developers to write great apps on Larrabee (I bet Carmack can't wait to try out his octree renderer). Quite a number of moving targets to nail down...
  • DerekWilson - Monday, August 4, 2008 - link

    1) Itanium does do certain things very very well -- it's just not the future of the desktop.

    2) Note that we did not use the word emulation or translation at any point -- Intel is NOT doing this with DirectX. It's just like any other API: the DirectX functions that must be implemented in a driver will be implemented in code written for Larrabee as opposed to code written for GT200 or RV770. Imagine some other API, or even just a DLL with some functions in it -- it's all the same no matter what hardware it's written for. In some cases the actual implementation will just set registers and issue a single command to the hardware to get something done. In Larrabee's case, functions to perform any complex operation will have to be called, but they are essentially doing the same thing (a toy sketch of this idea follows at the end of this comment).

    When I first got started with OpenGL, it was running in software. Hardware companies came along and implemented OpenGL functions in a way that used their hardware. These APIs are hardware independent (essentially).

    3) The raytracing talk is, to a degree, posturing. Raytracing scales well with traditional CPUs; rasterizers fit well on wide vector hardware. Larrabee happens to be both... so... I think they are interested in the long term. Honestly, when hybrid renderers come along that combine raytracing for specific effects inside a rasterizer, we'll see some really cool things. I think this is where Intel is going.

    NVIDIA even showed off some hybrid tech that could run on their hardware.

    ...

    ...

    Intel does have a lot of hurdles to get past, and that is definitely worth pointing out.
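
    As a purely illustrative aside on the API point above: the sketch below (Python, with hypothetical class and function names, not real DirectX or driver code) shows how one hardware-independent API call can be backed by very different driver implementations.

        class GraphicsAPI:
            # Hardware-independent API surface: callers only ever see these functions.
            def draw_triangles(self, vertices):
                raise NotImplementedError

        class FixedFunctionDriver(GraphicsAPI):
            def draw_triangles(self, vertices):
                # A conventional GPU driver might just program registers and queue a command.
                print(f"issuing hardware draw command for {len(vertices)} vertices")

        class SoftwareDriver(GraphicsAPI):
            def draw_triangles(self, vertices):
                # A Larrabee-style driver would instead call rasterization routines
                # running on the chip's x86/vector cores.
                print(f"rasterizing {len(vertices)} vertices in software")

        # The same API call works against either backend:
        for driver in (FixedFunctionDriver(), SoftwareDriver()):
            driver.draw_triangles([(0, 0), (1, 0), (0, 1)])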
  • mars777 - Monday, August 4, 2008 - link

    Just one curiosity:

    A predicted 64-core Larrabee would contain 64x32KB of L1 cache and 64x256KB of L2 cache.

    - 2048 KB of L1 + 16384 KB of L2 -

    Given a checkerboard configuration on silicon, this leads either to an abnormal die size with too many parallel lines and leakage, where cache coherency will be impossible, or to a 4/8-block configuration, which means slower L2 cache (but that means these are not just cores stuck on silicon but rather a custom core, which defeats the purpose of the project and makes this a pretty expensive solution).

    IMHO Larrabee will probably work out but will be nothing to cheer about, probably a pushed-up product that will eventually die out slowly (Itanium...).
  • DerekWilson - Monday, August 4, 2008 - link

    Itanium has what, 24MB of on-die cache? A large cache is not unreasonable for something like this -- but you are forgetting register space and the fact that the L1 has both 32K of data and 32K of instruction cache, so 64 cores would be 4MB of L1 (the totals are worked out in the short sketch at the end of this comment).

    The L2 cache is segmented so that each core can only directly access 256KB. The arrangement can be quite flexible because of this. Cache coherency is maintained through the ring bus: if one core needs data being used in another core's L2, it goes through the ring. At least that's my understanding.

    I apologize if we didn't do a good enough job in the article, but this isn't just a solution where Intel wants to drop stock cores on a die -- everything is custom, from the scalar and vector processors up to the internal memory bus and the added fixed function logic.

    The project has been in development for 4 years and is not meant to be cheap -- Intel is putting a lot into it.

    By the way -- I still think 32 cores is the sweet spot for launch based on the data Intel provided; I don't think they'll target a larger size off the bat.
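
    For reference, the cache totals discussed above work out as in the short sketch below, assuming the hypothetical 64-core configuration from the comment being replied to.

        cores = 64
        l1_kb = cores * (32 + 32)   # 32KB instruction + 32KB data per core = 4096 KB (4 MB) of L1
        l2_kb = cores * 256         # 256KB L2 slice per core = 16384 KB (16 MB) of L2
        print(l1_kb, l2_kb)         # 4096 16384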
  • Griswold - Monday, August 4, 2008 - link

    One of the few comments here that actually make sense.
  • FujiT - Monday, August 4, 2008 - link

    Some of you just don't get it.

    It's not about whether or not it can play Crysis at 100 FPS, and it's not as much about whether it can compete with AMD/NVIDIA (although that's important too).

    I see this chip as the beginning of a new revolution in computing. It reminds me a lot of the Cell processor (although I don't know that much about architecture), where a smarter CPU tells the dumber CPUs what to do. The ability to have a many-core CPU with a mixture of really smart cores and dumber but FP-optimized cores will really make stuff like rendering a lot faster on a CPU, and would take programs such as F@H to the next level. The added perk is the fact that it's all x86, as Anand pointed out.
