Hardware Features and Test Setup

We're talking about features and tests today because we're trying something a bit different this time around. In addition to our standard noAA/4xAA tests (both of which always have 8xAF enabled), we are including a performance test at maximum image quality on each architecture. This won't give us directly comparable performance numbers, but it will give us an idea of playability at maximum quality.

These days, we are running out of ways to push our performance tests. Plenty of games out there are CPU limited, and for what purpose is a card as powerful as an X1900 XTX or 7800 GTX 512 purchased except to be pushed to its limit and beyond? Certainly, a very interesting route would be for us to purchase a few Apple Cinema Displays and possibly an old IBM T221 and go insane with resolution. And maybe we will at some point. But for now, most people don't have 30" displays (though the increasing power of today's graphics cards is certainly a compelling argument for such an investment). For now, people can push their high end cards by enabling extreme features and getting the absolute maximum eye candy possible out of all their games. Flight and space sim nuts now have angle independent anisotropic filtering on ATI hardware, adaptive antialiasing for transparent textures helps in games with lots of fences, wires, and tiny detail work, and 6xAA combined with 16xAF means you'll almost never have to look at a blurry texture with jagged edges again. It all comes at a price, of course, but is it worth it?

In our max quality tests, we compare ATI parts running 16xAF, 6xAA, adaptive AA, high quality AF, and as little Catalyst AI as possible against NVIDIA parts running 16xAF, 4x or 8xS AA (depending on reasonable support in the application), transparency AA, and no optimizations (high quality). In all cases, ATI will have the image quality advantage with angle independent AF and 6x MSAA. Some games with in-game AA settings didn't have an option for 8xAA and didn't play well when we forced it in the driver, so we usually opted for the highest in-game AA setting (which, again usually, matches the highest MSAA level supported in hardware). We tend to like NVIDIA's transparency SSAA a little better than ATI's adaptive AA, but that may just come down to opinion, and it still doesn't make up for the quality advantages the X1900 holds over the 7800 GTX lineup.
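
Summarized as data, the two max quality configurations look like this (a plain restatement of the settings above in Python, with our own shorthand keys; this is not part of any benchmark script):

    max_quality = {
        "ATI": {
            "anisotropic": "16x, high quality (angle independent)",
            "antialiasing": "6x MSAA",
            "transparency": "adaptive AA",
            "optimizations": "Catalyst AI as low as possible",
        },
        "NVIDIA": {
            "anisotropic": "16x, high quality (no optimizations)",
            "antialiasing": "4x or 8xS (depending on application support)",
            "transparency": "transparency AA",
        },
    }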

Our standard tests should look pretty familiar, and all of the test hardware we used is listed below. Multiple systems were required in order to test both CrossFire and SLI, but all single card tests were performed on the ATI reference RD480 board.

ATI Radeon Xpress 200 based system
NVIDIA nForce 4 based system
AMD Athlon 64 FX-57
2x 1GB DDR400 2:3:2:8
120 GB Seagate 7200.7 HD
600 W OCZ PowerStream PSU

First up is our apples-to-apples testing with NVIDIA and ATI set up to produce comparable image quality with 8xAF and either no AA or 4xAA. The resolutions we will look at are 1280x960 (or 1280x1024) through 2048x1536.

Comments

  • DerekWilson - Tuesday, January 24, 2006 - link

    this is where things get a little fuzzy ... when we used to refer to an architecture as being -- for instance -- 16x1 or 8x2, we referred to the pixel shaders' ability to texture a pixel. Thus, when an application wanted to perform multitexturing, the hardware would perform about the same -- single pass graphics cut the performance of the 8x2 architecture in half because half the texturing power was ... this was much more important for early DX, fixed pipe, or OpenGL based games. DX9 threw all that out the window, as it is now common to see many instructions and cycles spent on any given pixel.

    in a way, since there are only 16 texture units you might be able to say it's something like 48x0.333 ... it really isn't possible to texture all 48 pixels every clock cycle ad infinitum. in an 8x2 architecture you really could texture each of 8 pixels with 2 textures every clock cycle forever. (there's a quick sketch of this arithmetic after this comment.)

    to put it more plainly, we are now doing much more actual work with the textures we load, so the focus has shifted from "texturing" a pixel to "shading" a pixel ... or fragment ... or whatever you wanna call it.

    it's entirely different than Xenos, as Xenos uses a unified shader architecture.

    interestingly though, R580 supports a render to vertex buffer feature that allows you to turn your pixel shaders into vertex processors and spit the output straight back into the incoming vertex data.

    but i digress ....
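
    As a minimal sketch of the 48x0.333 arithmetic above (purely illustrative; peak rates only, no real workload):

        # Peak textures applied per clock: classic 8x2 design vs. R580's
        # 48 pixel shaders sharing 16 texture units.
        def textures_per_clock(pixels, textures_per_pixel):
            return pixels * textures_per_pixel

        classic_8x2 = textures_per_clock(8, 2)    # 16 textures/clock, sustainable every cycle
        r580 = textures_per_clock(48, 16 / 48)    # also 16/clock at peak, but only 16 of the
                                                  # 48 pixels can be textured in any one cycle

        # The texture rate is flat while shader throughput triples: the shift
        # from "texturing" a pixel to "shading" it.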

  • aschwabe - Tuesday, January 24, 2006 - link

    I'm wondering how a dual 7800GT/7800GTX setup stacks up against this card.

    i.e. Is the brand new system I bought literally 24 hours ago going to be able to compete?
  • Live - Tuesday, January 24, 2006 - link

    SLI figures are all over the review. Go read and look at the graphs again.
  • aschwabe - Tuesday, January 24, 2006 - link

    Ah, my bad, thanks.
  • DigitalFreak - Tuesday, January 24, 2006 - link

    Go check out the review on hardocp.com. They have benchies for both the GTX 256 & GTX 512, SLI & non SLI.
  • Live - Tuesday, January 24, 2006 - link

    No, my bad. I'm a bit slow. Only the GTX 512 SLI numbers are in there. Sorry!
  • Viper4185 - Tuesday, January 24, 2006 - link

    Just a few comments (some are very picky, I know):

    1) Why are you using the latest hardware with an old Seagate 7200.7 drive when the 7200.9 series is available? Also, no FX-60?

    2) Disappointing to see no power consumption/noise levels in your testing...

    3) You are like the first site to show Crossfire XTX benchmarks? I am very confused... I thought there was only an XT Crossfire card, so how do you get Crossfire XTX benchmarks?

    Otherwise good job :)
  • DerekWilson - Tuesday, January 24, 2006 - link

    crossfire xtx indicates that we ran a 1900 crossfire edition card in conjunction with a 1900 xtx .... this is as opposed to running the crossfire edition card in conjunction with a 1900 xt.

    crossfire does not synchronize GPU speed, so performance will be (slightly) better when pairing the faster card with the crossfire edition card. (a rough model of this follows at the end of this comment.)

    fx-60 is slower than fx-57 for single threaded apps

    power consumption was supposed to be included, but we have had some power issues. We will be updating the article as soon as we can -- we didn't want to hold the entire piece in order to wait for power.

    hard drive performance is not going to affect anything but load times in our benchmarks.
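
    As a rough model of the pairing point above (idealized alternate frame rendering scaling; the frame rates are hypothetical, not measured):

        # Under ideal AFR, each GPU renders its own frames independently, so
        # the combined frame rate is just the sum of the two rates.
        def afr_fps(fps_a, fps_b):
            return fps_a + fps_b

        crossfire_edition = 60.0   # hypothetical frame rate for the master card
        xt = 60.0                  # X1900 XT runs the same 625 MHz core clock
        xtx = 62.4                 # X1900 XTX runs a 650 MHz core, ~4% faster

        print(afr_fps(crossfire_edition, xt))    # 120.0
        print(afr_fps(crossfire_edition, xtx))   # 122.4 -- the slightly better pairing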
  • DigitalFreak - Tuesday, January 24, 2006 - link

    See my comment above. They are probably running an XTX card with the Crossfire Edition master card.
  • OrSin - Tuesday, January 24, 2006 - link

    Are gamers going insane? $500+ for a video card is not a good price. Maybe it's just me, but are bragging rights really worth that kind of money? Even if you played a game that needs it, you should be pissed at the game company that puts out a bloated mess that needs a $500 card.
