Introduction

With our recent architecture and feature set coverage of the ATI Radeon X1000 series launch, we were a little bit light on the performance numbers. Rather than fill out some of the tests and update the article, we decided to take a little more time and do it up right. We have heard the call for more game tests, and today, we bring them on in spades. Our expanded tests include the games mentioned in our earlier coverage, the much-requested Battlefield 2, and we illustrate the stellar way in which the new X1000 series handles enabling 4xAA on the games that we tested.

While scaling with AA on the new X1000 series is very good, will it be good enough to make up for the price difference with competitive NVIDIA hardware? Certainly, the feature set is of value, with ATI offering the added benefit of MSAA on MRTs and floating point surfaces, high quality AF, SM3.0, and Avivo. But performance is absolutely critical in current and near-term games. Currently, many HDR methods avoid floating point output and MRTs in order to maintain compatibility with AA on current hardware. Until game developers shift to full floating point framebuffers or make heavy use of multiple render targets, ATI's added AA support won't make much difference to gamers. High quality anisotropic filtering is definitely something that we have begged NVIDIA and ATI for over a long time, and we are glad to see it, but the benefits just aren't that visible in first-person shooters and the like. Shader Model 3.0 and Avivo are good things to have around as well; better API support, image quality, and video output are things that everyone wants.

However, the bottom line is that performance sells video cards. The first thing that people question when they are looking for a new card is just how well it runs in their favorite game. Hopefully, we will be able to shed some light on the issue here.

We will look at resolutions from 800x600 up to 2048x1536 and antialiasing tests will be included where possible. In games where we tested both with and without antialiasing, we will also show a graph of how performance drops due to AA scales with resolution. This data will be a lower-is-better graph (less drop in frame rate is a good thing) and will be shown scaled over resolution (as a performance drop will increase in percentage with resolution). The test system that we employed is the one used for our initial tests of the hardware.
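The scaling figure described above boils down to the percentage frame-rate loss from enabling AA at each resolution. A minimal sketch of the calculation (the frame rates below are hypothetical placeholders, not the article's data):

```python
def aa_drop_pct(fps_no_aa, fps_4xaa):
    """Percent frame-rate loss from enabling 4xAA; lower is better."""
    return (fps_no_aa - fps_4xaa) / fps_no_aa * 100

# Hypothetical numbers for illustration only; the drop typically
# grows as a percentage with resolution, which is why the graph
# is plotted scaled over resolution.
resolutions = ["1024x768", "1600x1200", "2048x1536"]
no_aa = [120.0, 80.0, 50.0]
with_aa = [100.0, 60.0, 33.0]

for res, fast, slow in zip(resolutions, no_aa, with_aa):
    print(f"{res}: {aa_drop_pct(fast, slow):.1f}% drop")
```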

Battlefield 2 Performance
93 Comments

  • DonPMitchell - Tuesday, October 11, 2005 - link

    I wish they had benchmarked Half-Life 2, which is sort of a standard thing to look at. I don't disagree with their final conclusions, but I can't help but wonder if HL2 spanked NVIDIA, and that made them choose another game. It would be more unbiased for them to stick to a more or less standard set of tests, and it would let us compare to prior tests as well.
  • Larso - Monday, October 10, 2005 - link

    First I would like to thank the AT crew for another excellent article! The graphs are very intuitive and easy to read.

    I think that one aspect is missing in about every X1000 article I have found on the net: how does this new family of cards compare with the previous generation(s)? For example, what midrange card should I pick up as an upgrade for my 9600XT, and how much improvement will I get?

    Anybody stumbled over a review that compares these new cards with the previous generations??

    -

    Another question: I selected the 9600XT back then because it can easily be passively cooled. Now I wonder if any of the new cards can be silenced without turning the case into a bake oven?
  • MrJim - Sunday, October 9, 2005 - link

    I don't understand why AnandTech keeps not testing games other than shooters with these high-end cards, like demanding flight sims (Lock On is very graphics-intensive) and/or Pacific Fighters (not as GPU-dependent as Lock On). Also, why no racing sims? We who play these do use filtering and FSAA a lot, or at least the 350 ones I know do. Cheers!
  • dimitrio - Saturday, October 8, 2005 - link

    Someone said they liked the way the graphs were done. I must say that I found it a little difficult to do comparisons, since you have to look back and forth to see which card each symbol represents. After you give up figuring that out, you try to look at the data below, but again, since it's not ordered "better to worse", it takes some time to figure out that data. Things got even more complicated with the "Lower is Better" graph.

    I acknowledge that it makes everything much cleaner, and with the number of benchmarks in this article, you would end up with dozens of graphs on several pages, and you can clearly sense the writer's desire to improve the presentation with this new format. But sometimes things are better kept simple, and I still would like to see the many bar graphs, as they are much more intuitive and informative, to me at least.
  • photoguy99 - Saturday, October 8, 2005 - link

    This article is a prime reason - editors listen, participate, improve.

    Tom's doesn't even link their articles directly to discussions! Why? Can't handle the feedback?

    Glad to be here.
  • Spoelie - Saturday, October 8, 2005 - link

    Isn't anyone else confuzzled about the x1600xt's lackluster showing? I was really hoping to make it my next upgrade, but its current performance is only value-card worthy. Just looking at the specs (12 SM3.0 pixel pipes @ 600mhz) would have it creaming the 6600GT (8 SM3.0 pixel pipes @ 500mhz), but it's barely even competing with it. That's without considering other architectural advances and faster memory! My guess is that the ultimate fillrate is determined by the TMUs, and only having 4 makes this card a worthy 9600xt follow-up -which had 4 TMUs @ 500mhz- but nowhere near the mainstream card of this day and age. Extremely bad decision on ATI's part if this is it. I can't think of any other reason for this card to perform so pathetically. It would be nice to have it clarified if I'm completely missing the issue tho.
  • taylore2003 - Saturday, October 8, 2005 - link

    What AnandTech really needs to do is benchmark the x1300pro on a non-FX-55 system. People who buy that GPU will not have a top-of-the-line PC; do it on a 3200+ AMD, not an FX-55. I mean, come on, then people like me (damn all 16 y/o's) can see what kind of framerates we would be getting. The x1300pro should go vs the 6600 non-GT! Because we can see the x1800 is a great top-of-the-line card, but ATI's mid-to-low-range GPUs are not so hot. So let's see an x1300pro vs a 6600 non-GT with a reasonable test setup.
  • coldpower27 - Saturday, October 8, 2005 - link

    From what I can see, it seems the 4 TMUs are a very crippling limitation when compared to the 6600 GT, as that has 8 TMUs but 4 ROPs. It beats the 6600 GT, yes, but not by as much as we would expect.

    Compared to the 6600 GT

    Pure Pixel Fillrate: X1600 XT 7.08 GP vs 6600 GT 4.0 GP, 77% More (ATI)
    Output Pixel Fillrate: X1600 XT (4 TMUs) 2.36 GP vs 6600 GT (4 ROPs) 2.0 GP, 18% More (ATI)
    Vertex Shader Fillrate: X1600 XT 737.5 MT/s vs 6600 GT 375 MT/s, 96.6% More (ATI)
    Memory Bandwidth: X1600 XT 1380MHz 22.08 GB/s vs 6600 GT 1000MHz 16 GB/s, 38% More (ATI)

    Add to that the 256MB vs 128MB comparison, and "more efficient Shader Model 3.0 implementation".
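    These ratios can be sanity-checked from unit counts and clocks. A quick sketch, assuming a 590 MHz core and 690 MHz (1380 MHz effective) memory for the X1600 XT, 500 MHz core and memory for the 6600 GT, and 128-bit buses on both (these clock figures are assumptions consistent with the numbers quoted above, not stated in the article):

```python
def pct_more(a, b):
    """How much larger a is than b, in percent."""
    return (a / b - 1) * 100

# Pure pixel fillrate = pixel shader units * core clock (GPixels/s)
x1600_pixel = 12 * 0.590       # 7.08 GP/s
gt6600_pixel = 8 * 0.500       # 4.00 GP/s

# Output fillrate = output units * core clock (GPixels/s)
x1600_out = 4 * 0.590          # 2.36 GP/s
gt6600_out = 4 * 0.500         # 2.00 GP/s

# Memory bandwidth = bus width in bytes * effective clock (GB/s)
x1600_bw = (128 / 8) * 1.380   # 22.08 GB/s
gt6600_bw = (128 / 8) * 1.000  # 16.00 GB/s

print(round(pct_more(x1600_pixel, gt6600_pixel)))  # 77
print(round(pct_more(x1600_out, gt6600_out)))      # 18
print(round(pct_more(x1600_bw, gt6600_bw)))        # 38
```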

    Battlefield with AA.
    X1600 XT is ~ 50-60% Faster

    Day of Defeat Source with AA
    X1600 XT is ~ 33% Faster (Starting 12x9)

    Doom 3 with AA (OpenGL)
    X1600 XT is ~ 6600 GT

    FarCry with AA
    X1600 XT is ~ 37%-38% Faster (10x7 to 12x9)

    Chronicles of Riddick No AA (OpenGL)
    X1600 XT is ~ 6600 GT

    Splinter Cell Chaos Theory with AA&AF
    X1600 XT is ~ 12-18% Faster.

    Everquest No AA
    X1600 XT is 12-18% Faster.

    In some cases it's just faster up to the difference in their output fillrates, with Battlefield 2, FarCry & DOD:Source being its biggest wins, and the two OpenGL games its poorest showings.

    Not much of this is a surprise however.
  • AtaStrumf - Saturday, October 8, 2005 - link

    I was wondering something along those lines too, especially: why is the X1800 XT so much faster than the X1800 XL, and why is the X1300 Pro not that much slower than the X1600 XT?

    I don't get this new ring bus memory controller. Maybe it has something to do with that, as well as the TMUs and co. In the past we had 256-bit on the high end, 128-bit in the middle and 64-bit on the low end; now it seems as though all have _the same_ memory controller, which seems a bit odd to me. What is also peculiar is the fact that all but the highest-end X1800 XT have 256 MB of memory, while the X1800 XT has 512 MB. Does more memory now somehow equal more "bits" -- bandwidth?
  • haelduksf - Friday, October 7, 2005 - link

    I'm guessing you're actually having a "peek" at DOD:S performance ;)
