NVIDIA's Back with NV35 - GeForceFX 5900 Ultra
by Anand Lal Shimpi on May 12, 2003 8:53 AM EST
The Problem with Understanding Graphics
When NVIDIA introduced the GeForceFX, quite a ruckus ensued around the possibility that the NV30 did not have 8 distinct pixel rendering pipelines and in fact only had 4.
ATI quickly capitalized on the revelation and changed all of their marketing docs to point out that their R3x0 GPUs had twice as many pipelines as NVIDIA's flagship. The community cried foul and NVIDIA was chastised, but the ironic part of it all was that the majority of the stones thrown were based on poor information.
In fact, the quality of information coming from both ATI and NVIDIA has deteriorated significantly in recent history. Whereas companies like AMD and Intel are very forthcoming with the details of their microprocessor architectures, ATI and NVIDIA are very cryptic when they discuss their GPUs. The matter is further complicated by the introduction of marketing terms like "vertex engines," and by the practice of calling some parts of the GPU a "pipeline" while withholding the label from others, when both actually are "pipelines."
Now that GPUs are becoming much more like CPUs, it is important that we understand the details of their architecture much like we do CPUs. You will find discussions in our forums revolving around the Pentium 4's 20-stage pipeline, but the closest parallel in graphics discussions is counting pixel pipelines.
We can understand why both ATI and NVIDIA are much less forthcoming with information than their counterparts in the CPU industry; remember that a new microarchitecture is introduced every five years in the CPU world, whereas the same occurs in the GPU world every 6 to 12 months. ATI and NVIDIA have to be very protective of their intellectual property, as revealing too much could result in one of their innovations being found in a competitor's product six months down the road.
With that said, in this article we decided to dive a little deeper into the GPU and begin drawing some parallels to what we know from our experience with CPUs. If you're not interested in learning how these GPUs work, feel free to skip right ahead; otherwise, grab some popcorn.