The GPU

The GeForce processor, or Graphics Processing Unit (GPU) as NVIDIA coined it, provided much-needed enhancements to an aging TNT product line. In addition to delivering a fill rate of 480 million pixels per second on a 0.22-micron chip, the GeForce introduced new technology aimed at further 3D game enhancement. The most heavily promoted feature, the built-in Transform and Lighting (T&L) engine, has yet to be exploited to any great extent by current games. In theory, T&L allows for a higher polygon count in games because of the polygon processing ability of the GeForce processor. Normally, under heavy gaming loads, the CPU processes the polygons in a scene. By moving T&L into the GeForce processor, the video card can take over some of the polygon processing and thus free up valuable CPU time for enhanced gameplay. This way, the total polygon count in a given scene can be increased and realism enhanced. In addition, programmers can use the CPU cycles freed up to include more CPU-intensive features, such as artificial intelligence processing.
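To make the division of labor concrete, here is a minimal C sketch of the per-vertex transform-and-lighting loop that the CPU traditionally runs, and that fixed-function T&L hardware like the GeForce's can take over. The vertex data and helper names are made up for illustration; they are not from any real game or driver.

```c
#include <stdio.h>

/* Illustrative types only -- not from any real engine or API. */
typedef struct { float x, y, z; } Vec3;

static Vec3 vec3(float x, float y, float z) { Vec3 v = {x, y, z}; return v; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Transform a point by a row-major 4x4 matrix (w assumed to be 1). */
static Vec3 transform(float m[4][4], Vec3 v) {
    return vec3(m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3],
                m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3],
                m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]);
}

int main(void) {
    /* A tiny "mesh": three vertices with normals facing the viewer. */
    Vec3 positions[] = { vec3(0,0,0), vec3(1,0,0), vec3(0,1,0) };
    Vec3 normals[]   = { vec3(0,0,1), vec3(0,0,1), vec3(0,0,1) };
    /* World matrix is translation-only here, so object-space normals
       are already valid in world space. */
    float world[4][4] = { {1,0,0,0}, {0,1,0,0}, {0,0,1,-5}, {0,0,0,1} };
    Vec3 light_dir = vec3(0,0,1);   /* directional light toward the viewer */

    /* Per-vertex transform and diffuse lighting: the inner loop that
       fixed-function T&L hardware lifts off the CPU. */
    for (int i = 0; i < 3; i++) {
        Vec3 p = transform(world, positions[i]);
        float diffuse = dot(normals[i], light_dir);
        if (diffuse < 0) diffuse = 0;   /* clamp light facing away */
        printf("vertex %d -> (%.1f, %.1f, %.1f), diffuse %.2f\n",
               i, p.x, p.y, p.z, diffuse);
    }
    return 0;
}
```

Multiplied across tens of thousands of vertices per frame, this is exactly the arithmetic the GeForce can absorb so the CPU does not have to.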

The fact of the matter is that we have yet to see T&L implemented to any great extent in games. Not only will new games have to be written to take advantage of the high polygon counts that the GeForce can handle, but there is also the problem of playing those games on video cards without T&L. Playing a game optimized for T&L on a computer whose video card is not capable of it would result in very slow performance, since the CPU would be stuck processing all the complex polygons that the T&L hardware should be handling. As a solution, some game producers are considering selling two versions of their games: one for computers with T&L and another for computers without it. Until such games come along, all that remains to be seen of the GeForce's T&L is faster frame rates in games with moderately high polygon counts, such as Quake III Arena.
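The two-version problem boils down to a branch like the one sketched below. The function names here are hypothetical stand-ins for whatever capability query and rendering paths a real engine would use (under Direct3D, for instance, an engine would consult the driver's reported device capabilities).

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical engine hooks -- names invented for this sketch. */
static bool driver_reports_hw_tnl(void) { return false; }  /* stubbed query */

static void submit_raw_vertices_to_gpu(void) {
    puts("hardware T&L: sending untransformed geometry to the card");
}

static void transform_and_light_on_cpu(void) {
    puts("software path: CPU runs the per-vertex loop; feed it fewer polygons");
}

static void draw_scene(void) {
    if (driver_reports_hw_tnl()) {
        submit_raw_vertices_to_gpu();     /* GeForce-class card */
    } else {
        transform_and_light_on_cpu();     /* older, non-T&L card */
    }
}

int main(void) { draw_scene(); return 0; }
```

The catch is that the software path needs not just different code but different content, with far fewer polygons, which is why shipping two builds of a game is being considered at all.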

As stated before, NVIDIA likes to release processors in pairs: one usually carries a lower price with lower performance, while the other costs more but delivers faster speeds. This marketing plan lets NVIDIA capture not only high-end but also mid-range systems, and the company followed it again with the release of the GeForce chip. In this case, the GeForce SDR (single data rate) was targeted at those not needing the fastest card available, while the GeForce DDR (double data rate) was designed for the highest-end systems.

In the case of the TNT-2 and the TNT-2 Ultra, the more expensive Ultra used a higher-quality core that allowed for higher clock speeds. Unlike the TNT-2/TNT-2 Ultra distinction, the GeForce SDR and GeForce DDR processors do not vary at all: both have the same transistor count and run at the same clock speeds. The difference between the GeForce SDR and DDR lies in how the memory works.
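A quick back-of-the-envelope calculation shows why the memory interface matters so much. Assuming the commonly quoted reference clocks for the two boards (166 MHz SDR memory and 150 MHz DDR memory, both on a 128-bit bus), peak bandwidth works out as follows:

```c
#include <stdio.h>

/* Peak memory bandwidth = bus width in bytes x transfers per second.
   Clocks below are the commonly quoted reference-board figures;
   individual cards may ship at different speeds. */
int main(void) {
    const double bus_bytes = 128.0 / 8.0;   /* 128-bit bus = 16 bytes */
    const double sdr_clock = 166e6;         /* SDR: 1 transfer per clock */
    const double ddr_clock = 150e6;         /* DDR: 2 transfers per clock */

    double sdr_bw = bus_bytes * sdr_clock;
    double ddr_bw = bus_bytes * ddr_clock * 2.0;

    printf("GeForce SDR: %.2f GB/s\n", sdr_bw / 1e9);   /* ~2.66 GB/s */
    printf("GeForce DDR: %.2f GB/s\n", ddr_bw / 1e9);   /* ~4.80 GB/s */
    return 0;
}
```

Despite running its memory at a lower clock, the DDR board moves nearly twice the data per second, which is the whole point of double data rate memory.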
