Telling a worthy manufacturer that they cannot compete in the gaming market is much like telling a nice guy that he simply can't play basketball. While sitting in a car with three other members of the ATI team, we were having a nice discussion about the present graphics card market. When one of the ATI representatives asked for our opinion on a higher-clocked Rage Fury Pro, possibly in the TNT2 Ultra range of speeds, we were taken by surprise. Here, for the first time since the true introduction of 3D accelerated gaming on the PC, we had ATI talking about assuming a leading role in the gaming market. Although it is true that just one year ago ATI had the potential to take the gaming market with the release of their Rage 128 chip, delays in getting the part out the door quickly snatched that gold medal away from them. This conversation took place just under six months ago, and as shocked as we were back then when ATI was talking about taking on NVIDIA, one of the leaders in the 3D accelerated PC gaming market, we were just as shocked when they dropped the news about project Aurora.

Project Aurora started out as a cryptic page on ATI's site and shortly thereafter turned into a skeptically received press release as the term Aurora morphed into ATI's latest offering, the Rage Fury MAXX card. The Rage Fury MAXX revisited an idea that was first introduced to the gaming market with the advent of 3dfx's Voodoo2: the idea of putting two standalone graphics chips together in order to provide a desirable performance boost with minimal added engineering time.

The idea of using multiple processors to quickly achieve a performance boost without having to wait for the technology to improve is something that is presently all around the industry. 3dfx's Scan Line Interleave (SLI) on the Voodoo2 was a quick and easy way to gain a nice performance boost simply by adding on another graphics card. The SLI technology split the horizontal lines of the frame evenly between the two cards in the configuration, so one card would handle every even line while the other card would handle every odd line. Because both cards worked on the same scene, the textures present in the scene had to be duplicated in the frame buffers of both cards being used. This was a highly inefficient manner of improving performance, but, then again, at the time, the 8/12MB of memory on a single Voodoo2 was more than enough for the games.
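The even/odd split described above can be sketched in a few lines of Python. This is purely illustrative; it is not 3dfx driver code, and the function name is our own:

```python
# Illustrative sketch of Scan Line Interleave's work division: card 0
# renders every even scanline, card 1 every odd scanline.
def sli_assign_lines(height: int) -> dict[int, list[int]]:
    """Map each of a frame's scanlines to the card that renders it."""
    assignment: dict[int, list[int]] = {0: [], 1: []}
    for line in range(height):
        assignment[line % 2].append(line)
    return assignment

lines = sli_assign_lines(480)
# Each card renders exactly half of a 640x480 frame's 480 lines.
assert len(lines[0]) == len(lines[1]) == 240
```

Note that the split is perfectly even regardless of scene content, which is exactly why SLI needed no load balancing in the drivers, and also why both cards still needed every texture in the scene.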

On the other hand, this manner of improving performance was very appealing to gamers because they could absorb the cost of owning a single Voodoo2 board, enjoy the performance, and when they came across a little more cash they could make the upgrade to a Voodoo2 SLI configuration and see an immediate performance increase. The key to the success of 3dfx's Voodoo2 SLI was the fact that you never threw away your initial investment, something very rare in the graphics accelerator market.

The success of the SLI technology led to the question of whether or not 3dfx's Voodoo3 would support SLI. Another company, Metabyte, stepped forth with a technology that was unofficially dubbed SLI, yet, with a few modifications, it could be used on any card. Metabyte officially called this technology their Parallel Graphics Configuration (PGC). The PGC technology split any given frame into two parts, with each card/chip handling one part of the screen. This approach required quite a bit of elegance in the drivers themselves, as they had to account for factors like what would happen if the card rendering the top half of the screen (which is generally less complex than the bottom half) finished before the other card was done rendering the bottom half. At the same time, the end result would be much more efficient than 3dfx's SLI design because the textures did not have to be duplicated and the polygon throughput of the setup was effectively doubled, whereas it remained equal to that of a single card in the Voodoo2 SLI situation. Unfortunately, Metabyte's PGC never made it to market, a shame considering the product could have been quite a success. Can you imagine laughing at a GeForce's 480 Mpixel/s fill rate while running dual Voodoo3 3500's (732 Mpixel/s) or dual TNT2 Ultras (600 Mpixel/s)?
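The load-balancing problem the PGC drivers faced can be sketched as follows. To be clear, this is a hypothetical illustration of the general idea, not Metabyte's actual algorithm; the function name and step size are our own assumptions:

```python
# Hypothetical sketch of split-frame load balancing: the frame is cut at
# a horizontal boundary, and if the chip drawing the (usually simpler)
# top half keeps finishing early, the boundary is nudged down so that
# chip takes on more lines next frame. Names and numbers are illustrative.
def rebalance_split(split: int, height: int,
                    top_ms: float, bottom_ms: float, step: int = 8) -> int:
    """Return the new split line given last frame's per-half render times."""
    if top_ms < bottom_ms:        # top chip sat idle: give it more lines
        split = min(height - step, split + step)
    elif bottom_ms < top_ms:      # bottom chip sat idle: give it more lines
        split = max(step, split - step)
    return split

# Top half finished in 9 ms while the bottom took 14 ms, so the
# boundary moves down from line 240 to line 248.
assert rebalance_split(240, 480, 9.0, 14.0) == 248
```

Because each chip only ever touches its own region of the frame, textures and geometry for that region need only live in that chip's memory, which is where PGC's efficiency advantage over Voodoo2-style SLI comes from.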

ATI turned project Aurora into their take on the same idea, and thus ATI's Alternate Frame Rendering (AFR) Technology was born. As the name implies, AFR divides the load between the two chips in the configuration by frames, instead of parts of frames. One chip will handle the current frame while the second chip is handling the next frame. ATI's AFR is the basis for the Rage Fury MAXX and future cards which will carry the MAXX name.
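AFR's division of labor is simple enough to express in one line. The sketch below is our own illustration of the scheme, not ATI driver code:

```python
# Minimal sketch of Alternate Frame Rendering's chip assignment:
# whole frames alternate between the two chips, so each chip gets
# twice as long to render its frame as a single chip would.
def afr_chip_for_frame(frame: int) -> int:
    """Return which of the two chips (0 or 1) renders the given frame."""
    return frame % 2

# Frames 0, 1, 2, 3 alternate between chip 0 and chip 1.
assert [afr_chip_for_frame(f) for f in range(4)] == [0, 1, 0, 1]
```

Unlike SLI or PGC, no single frame is ever split, so there is no per-frame load balancing to get right; the trade-off is that each chip needs its own complete copy of the scene data for the frame it is rendering.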

The Rage Fury MAXX was ATI's only chance at competing with what 3dfx, NVIDIA and S3 hoped to have released by the end of the 1999 holiday shopping season. ATI had no new chip that would allow them to compete with the big boys; all they had was the Rage 128 Pro, which delivered performance somewhere between that of a TNT2 and a TNT2 Ultra for about the price of the latter. The Rage 128 Pro itself is a 0.25-micron chip clocked at 125MHz, resulting in a 250 Mpixel/s fill rate; put two of these together and you have a setup capable of beating NVIDIA's recently launched GeForce 256 (500 Mpixel/s versus 480 Mpixel/s). The Rage 128 Pro was featured on ATI's recently released Rage Fury Pro, and the combination of two of these chips using ATI's AFR technology is a product known as the Rage Fury MAXX. With less than three weeks left in 1999, ATI will be pushing the Rage Fury MAXX onto shelves within the next 10 days, pitting it head to head with NVIDIA's GeForce, which has been dominating those shelves. Not only is ATI attempting to compete with NVIDIA on a performance level, but on the issue of price as well, as they have vowed to match the price of the GeForce with the Rage Fury MAXX. Bold claims from a company that isn't presently known as a competitor in the gaming community.
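The fill-rate figures quoted above all follow from the same arithmetic: clock speed times pixel pipelines per chip times number of chips. A quick sketch (our own helper, for illustration only):

```python
# Fill rate in Mpixel/s = clock (MHz) x pixel pipelines per chip x chips.
def fill_rate_mpixels(clock_mhz: int, pipelines: int, chips: int = 1) -> int:
    return clock_mhz * pipelines * chips

# Rage 128 Pro: 125MHz, 2 pipelines -> 250 Mpixel/s per chip.
assert fill_rate_mpixels(125, 2) == 250
# Rage Fury MAXX: two Rage 128 Pros -> 500 Mpixel/s.
assert fill_rate_mpixels(125, 2, chips=2) == 500
# GeForce 256: 120MHz but 4 pipelines -> 480 Mpixel/s.
assert fill_rate_mpixels(120, 4) == 480
```

This is also why the MAXX's on-paper edge over the GeForce is so slim: the GeForce makes up for its lower clock with twice as many pipelines per chip.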

The Specs
