The Awesome Potential of Fully Programmable Graphics

Certainly we can't judge the applicability and impact Larrabee will have until we see how it handles real-world applications. But we absolutely cannot write off a giant like Intel when they throw their chips into the pot. Some in the current graphics hardware establishment have tried to suggest to us that Intel is not in touch with the current development community, and that the only reason some developers are excited about the extensive low-level programmability of Larrabee is that they are nostalgic for the old days of graphics programming, when it was all about the software renderer.

I don't think anyone is under the illusion that DirectX and OpenGL performance is irrelevant for Larrabee. If Intel fails to deliver equivalent or greater price/performance in games and applications that use these APIs, then no matter how well the hardware could serve a custom software engine, it will fail. But the potential to customize every part of the rendering pipeline, to run a software renderer with the same level of performance as if the hardware had been designed for it, adds a level of value for the development community that will absolutely blow away anything NVIDIA or AMD can currently offer (or will offer for the foreseeable future).
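To make the idea of "customizing every part of the pipeline" a little more concrete, here is a minimal sketch of what a fully software-defined pipeline stage could look like: a blend stage that is just ordinary code, with the blend rule supplied by the engine rather than baked into a fixed-function ROP. Everything in it (the names, the tile size, the pixel format) is hypothetical and for illustration only; it is not taken from any Intel API or documentation.

```cpp
// Toy illustration only: a "blend stage" written as ordinary code, the way a
// fully programmable part would allow, instead of a fixed-function ROP.
// All names, the tile size, and the pixel format here are hypothetical.
#include <cstdint>
#include <cstdio>
#include <functional>
#include <vector>

struct Pixel { uint8_t r, g, b, a; };

// The engine supplies whatever blend rule it wants as plain code.
using BlendFn = std::function<Pixel(const Pixel& src, const Pixel& dst)>;

// Software blend stage: walk one tile of the framebuffer and apply the rule.
// On a many-core x86 design this loop would simply be split across cores.
void blend_tile(std::vector<Pixel>& framebuffer,
                const std::vector<Pixel>& incoming,
                const BlendFn& blend) {
    for (size_t i = 0; i < framebuffer.size(); ++i)
        framebuffer[i] = blend(incoming[i], framebuffer[i]);
}

int main() {
    std::vector<Pixel> fb(64, Pixel{0, 0, 0, 255});       // destination tile
    std::vector<Pixel> src(64, Pixel{255, 128, 0, 128});   // incoming fragments

    // A custom 50/50 average blend, a rule a fixed-function blender might not
    // expose, but trivial to express when the stage is just software.
    blend_tile(fb, src, [](const Pixel& s, const Pixel& d) {
        return Pixel{uint8_t((s.r + d.r) / 2), uint8_t((s.g + d.g) / 2),
                     uint8_t((s.b + d.b) / 2), uint8_t((s.a + d.a) / 2)};
    });

    std::printf("pixel 0 after blend: %d %d %d %d\n",
                fb[0].r, fb[0].g, fb[0].b, fb[0].a);
    return 0;
}
```

The point is not this particular blend rule; it is that on a fully programmable design, swapping in a different rule, or an entirely different pipeline stage, is a code change rather than a hardware feature request.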

Re-opening the door for Tim Sweeney, John Carmack, Michael Abrash, and other pioneers and visionaries in the field of 3D graphics to once again take a piece of hardware that offers the kind of data-parallel speed heretofore limited to the GPU and literally do anything they want with it is something to be excited about. Being constrained far less by the physical design of the hardware, and once again only by the performance of any given segment of code, could help speed up the transition of techniques from SIGGRAPH to games. Larrabee could help create a new wellspring of research, experimentation and techniques for real-time graphics, the likes of which have not been seen since the mid-to-late 1990s.

We have absolutely been seeing the current graphics hardware giants move toward more flexibility and programmability. But if Intel is able to effectively leapfrog that slow, DX-version-by-DX-version trudge toward true general purpose programmability, we will see the end of an era in which games are feature-limited by hardware. No longer would we need new hardware to handle a new DX version with new techniques and effects: we would only need a driver update to add support for the new API. The only obstacle to running games built on future APIs would be performance, and the only reason to upgrade would be speed. It would be a different world, altogether different from anything we've known or experienced before, yet incredibly similar to the roots from which the industry was born.

It is an exciting time to be in the field of computer graphics.

Comments

  • Shinei - Monday, August 4, 2008 - link

    Some competition might do NVIDIA good; if Larrabee manages to outperform them, you know NVIDIA will go berserk and release another hammer like the NV40, which came after the R3x0 spanked them for a year.

    Maybe we'll start seeing the kind of price/performance gains we were spoiled with before ATI/AMD stopped being competitive.

    Overall, this can only mean good things, even if Larrabee itself ultimately fails.
  • Griswold - Monday, August 4, 2008 - link

    Wake-up call, dumbo. AMD just started to mop the floor with NVIDIA's products as far as price/performance goes.
  • watersb - Monday, August 4, 2008 - link

    great article!

    You compare Larrabee to a Core 2 Duo - for SIMD instructions, you multiplied by a (hypothetical) 10 cores to show Larrabee at 160 SIMD instructions per clock (IPC). But you show non-vector IPC as 2.

    For a 10-core Larrabee, shouldn't that be x10 as well, for 20 scalar IPC?
  • Adamv1 - Monday, August 4, 2008 - link

    I know Intel has been working on Ray Tracing and I'm really curious how this is going to fit into the picture.

    From what I remember, ray tracing is highly parallel and scales quite well with more cores, and they were talking about introducing it on 8-core processors; it seems to me this would be a great platform to try it on.
  • SuperGee - Thursday, August 7, 2008 - link

    How it fits:
    GPUs from ATI and NVIDIA are essentially hardware renderers: still a lot of fixed function (ROPs, TMUs, blenders, rasterizers, etc.). Unified shaders are evolving toward general purpose, but they aren't fully general purpose yet.
    Larrabee is an exotic x86 massive multi-core. It will act just like a multicore CPU, but optimized for GPU tasks and deployed as a GPU. So Intel uses a software renderer and will first emulate DirectX/OpenGL on it through its drivers.
    NVIDIA and ATI are more like a HAL (hardware abstraction layer) with a HEL (hardware emulation layer) as backup, whereas Larrabee is pure HEL. But its parallel power will boost software methods, since it is basically a large bunch of x86 cores, so the HEL will run fast, as if it were HAL, because the computing power for such tasks is available.

    What this means is that, as a graphics engine developer, you get full freedom if you target Larrabee directly.

    As they say, first with a DirectX/OpenGL driver, and later also with a driver that lets it be targeted directly, like GPGPU tasks; Larrabee could even show up as extra cores in Windows.
    Because whatever you do is effectively a software solution, you could write a software renderer based on ray tracing, or even a voxel engine, and it would be accelerated by Larrabee's massive multi-core design, which handles GPU-style work very well but will boost any software renderer. Of course it must be fully optimized for Larrabee (the vector units and the Larrabee x86 extensions) to get the most out of it.

    NovaLogic could have used this too, for the voxel game engine they had back in the Pentium III days.

    It could accelerate any software renderer which depends heavily on parallel computing.
  • icrf - Monday, August 4, 2008 - link

    Since I don't play many games anymore, that aspect of Larrabee doesn't interest me beyond the economies of scale that might let me buy one cheap. I'm very interested in seeing how well something like POV-Ray or an H.264 encoder could be implemented, and what kind of speed increase it'd see. Sure, these things could be implemented on current GPUs through CUDA/CTM, but that's such a different kind of task that it's not at all quick or easy. If it's significantly simpler on Larrabee, we'd actually see software that supports it sooner.
  • cyberserf - Monday, August 4, 2008 - link

    one word: MATROX
  • Guuts - Monday, August 4, 2008 - link

    You're going to have to use more than one word, sorry... I have no idea what in this article has anything to do with Matrox.
  • phaxmohdem - Monday, August 4, 2008 - link

    What, you mean you DON'T have a Parhelia card in your PC? WTF is wrong with you?
  • TonyB - Monday, August 4, 2008 - link

    but can it play Crysis?!
