When the first real battle of 3D accelerators was fought there were many more contenders than just ATI and NVIDIA. In fact the search wasn't for a GeForce4 killer, but rather a card that could topple the mighty Voodoo2 from 3dfx. A video card review wouldn't be complete without a comparison between 3dfx, ATI, Matrox, NVIDIA and S3. Our first review of Matrox's Millennium G400MAX had a total of 12 cards from all of these manufacturers represented. Our last video card roundup had 9 cards, 6 of which were from NVIDIA and the remaining 3 from the ATI camp.

Darwinism left the market with two major competitors, which wasn't necessarily a bad thing. Performance has improved tremendously since the days of the Savage3D and G200, while image quality and features have reached new heights. But as we've seen time and time again, whenever there is a paradigm shift in any market there is room for market share to be lost and gained. With the trend in GPUs tending towards more flexible and programmable cores (as we mentioned in our GeForce3 review, there's a trend towards making GPUs much more CPU-like), those companies that don't catch on will lose market share while the companies that can show leadership will undoubtedly gain.

Although NVIDIA has been shipping the world's first mass-market programmable GPU for close to two years now, the technology is still in its infancy. A tremendous enabling platform for today's GPUs has been Microsoft's DirectX 8, which promised the ability to harness the power of these programmable "shader" units. While we still haven't seen many hard-hitting titles make use of the most attractive DX8 features, such games are finally on the horizon. The demos and upcoming titles that use these pixel and vertex shaders show off many of the effects that can be accomplished when you're not dealing with a fixed-function graphics pipeline, but to 3DLabs just having a very flexible set of shaders isn't enough.

Why Programmable?

The name of the game is programmability. In the days before hardware 3D accelerators, developers used the host CPU to handle everything from physics and AI to rendering the actual frames. The problem was that even the most powerful desktop CPUs aren't well suited to the kind of intense work necessary to run these 3D games. They are very flexible in that you can program them to do just about any function you'd like; however, they have neither the memory bandwidth nor the spare processing power to handle all of their "everyday" tasks and still perform all the 3D rendering necessary at high frame rates.

The introduction of hardware 3D accelerators took the burden away from the host CPU by providing a dedicated processor, with a good amount of dedicated memory bandwidth, suited to 3D rendering and nothing else. The problem with this approach was that the processor manufacturers defined what programmers could do with their hardware. And while hardware engineers are great at implementing state machines and optimizing logic, they aren't the Tim Sweeneys or the John Carmacks of the world when it comes to defining what the next generation of 3D games will need.

Then came today's generation of "programmable" (we'll explain the quotes in a bit) 3D accelerators. Architectures like the GeForce4 and Radeon 8500 give developers quite a bit of freedom: the freedom to dictate what they want these very powerful GPUs to do. However, the freedom isn't unlimited; the same freedom that currently exists for a developer writing code for an AMD Athlon XP or Intel Pentium 4 doesn't exist for a game developer creating a 3D engine. The flexibility offered by today's GPUs is a dream come true for developers whose creativity has been limited by the fixed-function graphics pipelines of yesterday, but according to 3DLabs that's still not enough.

We've already mentioned why the host CPU, in all its infinitely programmable glory, cannot be used as the sole 3D accelerator in a system. But what if a similar, general-purpose 3D processor could be introduced, one completely programmable much like a CPU? Imagine a processor with an incredible number of parallel execution units tailored specifically to the SIMD nature of 3D rendering calculations, yet with the programmability of a desktop CPU. The power of this processor could be harnessed through a high-level programming language (much like C/C++, FORTRAN, Java, C#, etc… for desktop CPUs), giving game developers the utmost flexibility to be as creative as they can possibly imagine.

This is 3DLabs' vision (we'll discuss how realistic it is later on), and honestly it's the direction the 3D graphics market is headed in. Competition within the market won't continue if developers are forced to choose between supporting ATI's pixel shaders or NVIDIA's. When software developers write a program for x86 CPUs, they don't decide to support an AMD version of x86 or an Intel version; there's one general instruction set, and while the hardware implementing that instruction set may differ from one CPU vendor to the next, the difference is generally transparent to the developer.

The quote 3DLabs points to in order to illustrate the problem with the current "programmable" GPUs is from an old .plan update by John Carmack, in reference to the GeForce3's pixel shaders:

"Other companies had potentially better approaches, but they are now forced to dumb them down to the level of the GF3 for the sake of compatibility. Hopefully we can still see some of the extra flexibility in OpenGL extensions."

Instead of "dumbing down" their approach to a programmable pixel pipeline to the level of the GeForce3, 3DLabs decided to create an even more programmable architecture that they claim is a superset of anything currently available.

Today's surprise announcement is 3DLabs' latest graphics architecture, which will be used in everything from future professional cards to consumer/gaming products later this year. Even more of a surprise is that production-quality silicon is already up and running at 3DLabs, and the first shipments of this technology should occur within two months.

Let's get started…

VPU - It's Time to Learn a new Acronym