Scaling Down the X800

The basic architecture of the X700 has already been covered, as it is based on the X800's R423 core. The X700 takes things down a notch and runs with only 8 pixel pipelines at a maximum core clock of 475MHz on the XT version. The memory interface on the X700 series of cards is 128-bit, giving it half the memory bandwidth (on a clock for clock basis) of its big brothers. Oddly, this time around there are two $200 versions of the card. The X700 XT (which we are testing today) runs a 475MHz core with 1.05GHz memory and 128MB of RAM. The X700 Pro will launch with a 420MHz core, 864MHz memory, and 256MB of RAM. We are definitely interested in testing the performance characteristics of the X700 Pro to determine whether the lower clocks are able to take advantage of the larger memory to provide performance on par with its price, but for now we'll have to settle for X700 XT numbers. ATI will also be producing an X700 card with a 400MHz core and 128MB of 700MHz memory that will retail for $149.
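To put the narrower bus in perspective, here is a quick back-of-the-envelope sketch of peak memory bandwidth using the clocks quoted above. The 256-bit figure simply assumes an X800-class bus running at the same effective memory clock, not the X800's actual memory speed:

```python
# Peak memory bandwidth = (bus width in bytes) x (effective memory data rate).
def peak_bandwidth_gbps(bus_width_bits: int, effective_mem_clock_mhz: float) -> float:
    return (bus_width_bits / 8) * effective_mem_clock_mhz * 1e6 / 1e9

# X700 XT: 128-bit bus at 1.05GHz effective -> ~16.8 GB/s
print(peak_bandwidth_gbps(128, 1050))

# A hypothetical 256-bit card at the same memory clock -> ~33.6 GB/s (double)
print(peak_bandwidth_gbps(256, 1050))
```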



In a very interesting twist, one of the features of R423 that didn't get scaled down is the vertex engine. The RV410 GPU retains all 6 vertex pipelines. This should give it an advantage over NVIDIA's solution in geometry-heavy environments, especially where an underpowered CPU is involved; NVIDIA opted to keep only 3 of NV40's 6 vertex pipes. Of course, NVIDIA's solution is also capable of VS3.0 (Vertex Shader 3.0) functionality, which should help if developers take advantage of those features. Vertex performance between these midrange cards will have to be an ongoing battle as more features are supported and drivers mature. We will also have to come back and examine the cards on an underpowered CPU to see whether the potential advantage ATI has in a wider vertex engine manifests itself in gameplay on a midrange-priced system.
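For a rough sense of what the extra vertex pipes could mean on paper, here is a small sketch of peak vertex throughput. The assumption that each pipe spends about 4 clocks per transformed vertex is ours, not a vendor-confirmed figure, and real throughput depends heavily on shader length and how well the pipes stay fed:

```python
def peak_vertex_rate_millions(vertex_pipes: int, core_clock_mhz: float,
                              cycles_per_vertex: float = 4.0) -> float:
    # Millions of vertices per second, assuming every pipe stays busy.
    return vertex_pipes * core_clock_mhz / cycles_per_vertex

print(peak_vertex_rate_millions(6, 475))  # RV410 (X700 XT): ~712.5 M vertices/s
print(peak_vertex_rate_millions(3, 500))  # NV43 (6600 GT):  ~375 M vertices/s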

In addition to scaling down a few key features of the X800, ATI physically scaled down the RV410 by fabbing it on TSMC's 110nm process. This is the same process used in the X300 series. This isn't a surprising move, as ATI has been introducing new fab processes on cheaper parts in order to sort out any wrinkles before a brand-new architecture is pushed out. NVIDIA also went with the 110nm process this time around.

Interestingly, ATI is pushing its 110 million transistors at a 25MHz lower core clock than NV43. This does give NVIDIA a fillrate advantage, though pure fillrate means very little these days. At first, we were very surprised that ATI didn't push their GPU faster. After all, RV410 is basically a cut-down R423 with a die shrink, and the X800 XT runs at the same core speed as the 6600 GT (500MHz). If NVIDIA could increase its clock speed in moving from the 6800 series to the narrower and smaller 6600 series, why couldn't ATI do the same when moving from X800 to X700?
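That fillrate gap is easy to quantify on paper (peak theoretical numbers only; sustained fillrate depends on memory bandwidth and shader workload):

```python
def peak_fillrate_mpixels(pixel_pipes: int, core_clock_mhz: float) -> float:
    # Peak pixel fillrate in megapixels per second: one pixel per pipe per clock.
    return pixel_pipes * core_clock_mhz

print(peak_fillrate_mpixels(8, 475))  # X700 XT: 3800 Mpixels/s
print(peak_fillrate_mpixels(8, 500))  # 6600 GT: 4000 Mpixels/s
```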

In asking ATI about this, we got quite a few answers, the combination of which generally satisfied us. First, ATI's 130nm process is a low-k process, meaning there is less capacitance between metal layers on the die, which gives ATI's GPUs added stability at higher core speeds. Shrinking to a 110nm process and losing the low-k advantage makes this more than a simple die shrink. Also, ATI's X800 series of cards are supplied with external power, while the X700 relies only on the PCIe connection for juice. This could easily explain why the X700 XT is clocked lower than the X800 XT, and the extra vertex hardware ATI kept (along with the transistors and power it requires) could help explain why NVIDIA was able to hit a higher clock speed than ATI. We could be wrong, but without knowing both companies' yield rates and power consumption we really can't say for sure why things ended up this way.

The X700 retains the video processing features of the X800 as well, and we are starting to see both ATI and NVIDIA gearing up for some exciting things in video. Additionally, ATI has stated that the X700 is capable of running stably while passively cooled in a BTX system (albeit with a massive passive heatsink). This, of course, is something the X800 cannot do. ATI has said that initial BTX versions of its cards will be OEM only, as that is where BTX will make its first mark.

Now that we know a little more about the X700, let's take a look at the numbers.

Comments
  • PrinceGaz - Wednesday, September 22, 2004 - link

    #27 The Plagrimaster:

    Do you know what a paragraph is?

    Anyway, 1280x960 or 1280x1024 is becoming the more common resolution used by many people with fairly recent systems, even if it's only because 1280x1024 is the native resolution of their LCD display, so anything else looks inferior.

    How fast someone's CPU is really only determines the maximum framerate that can be achieved in any given game sequence regardless of resolution. The CPU itself won't churn out frames more quickly just because the graphics-card is rendering at a lower resolution. That answers the first half or so of your post.

    As the X700 series are upper mid-range cards, they are intended to be used at quite high resolutions, not 1024x768 or less. The tests showed the X700XT was easily capable of producing a more than satisfactory framerate at 1280x1024 in every game tried, including Doom 3, so why run more tests at 1024x768? Only if it were a slower card which could only manage 30-40fps or less at 1280x960 would tests at lower resolutions be worthwhile.
  • kmmatney - Wednesday, September 22, 2004 - link

    Since these are now "low-end" cards, it would be great to see how they perform with slower cpus. I still have a lowly XP 2400+ thoroughbred...and I'd rather spend money on my Video card than another MB/CPU, if it can perform (at 1024 x 768).
  • Chuckles - Wednesday, September 22, 2004 - link

    I don't know about you, #27, but I think 10x7 is tunnel vision. Decent sized monitors are not all that expensive, and they allow you to do so much more with the space.
  • ThePlagiarmaster - Wednesday, September 22, 2004 - link

    What I want to know is how everything performs at 1024x768 with and without 4xaa/8xan. Let's face it, 95% of the people running these games are NOT doing it in anything higher. To cut this res out of everything but Doom 3 (an oddball engine to begin with) is ridiculous. Sure, higher shows us bandwidth becomes a big issue. But for most people running at 1024x768 (where most of us have cpu's that can keep a decent fps), does bandwidth really matter at all? Is a 9700pro still good at this res? You have to test 1024x768, because all you're doing here is showing one side of the coin. People who have the FASTEST FREAKING CPU's (eh, most don't - raise your hand if you have an athlonFX 53 or A64 3400+ or better? - Or even a P4 of 3.4ghz or faster? - I suspect most hands are DOWN now), to go with the fastest GPU's. Most people cut one or the other. So you need to show how a card does at a "NORMAL" res. I usually can't even tell the difference between 1024x768 and 1600x1200. At the frenetic pace you get in a FPS you don't even see the little details. Most of us don't hop around in different resolutions for every different game either. Most of my customers couldn't even tell you what resolution IS! No, I'm not kidding. They take it home in the res I put it in and leave it there forever (1024x768). If you're like me you pick a res all games run in without tanking the fps. Which for me is 1024x768. I don't have to care what game I run, I just run it. No drops during heated action. I hope you re-bench with the res most people use so people can really see, is it worth the money or not at the res the world uses? Why pay $200-400 for a new card if the 9700pro still rocks at 1024x768, and that expensive card only gets you another couple fps this low. I know it gets tons better with much higher res's but at the normal person's res does it show its value or not? In Doom it seems to matter, but then this game is a graphical demo. No other engine is quite this punishing on cards. A good 70% or so of my customers still buy 17inchers! Granted some games have multi-res interfaces, but some get really small at larger resolutions on a 17in. This article is the complete opposite of running cpu tests in 640x480 but yields the same results. If nobody runs at 640x480 how real-world is it? If "almost" nobody runs in 1600x1200 should we spend more time looking at 1024x768 where 90% or so run? That's more real world right? 1600x1200 is for the fastest machines on the planet. Which is NOT many people I know, and I sell pc's...LOL.
  • AtaStrumf - Wednesday, September 22, 2004 - link

    At the very least have a look at the Far Cry and Halo results. They really seem to be upside down.

    I don't know who's making the mistake here, but it's something that needs looking into.
  • AtaStrumf - Wednesday, September 22, 2004 - link

    Derek, I think your GPU scores urgently need updating. We need to be able to compare new cards to old ones and we just can't do that reliably right now. Have a look at xbitlabs test results.

    http://www.xbitlabs.com/articles/video/display/ati...

    Relative positions between the 9800 XT and X700 XT are more often than not different from your results.

    In their results it seems like the R9800 XT fares much better relative to the X700 XT. We might be drawing the wrong conclusions based on your scores.
  • Da3dalus - Tuesday, September 21, 2004 - link

    Quite clearly a win for nVidia in this match :)

    Hey Derek, are you gonna do a big Fall 2004 Video Card Roundup like you did last year? That would be really nice :)
  • jm0ris0n - Tuesday, September 21, 2004 - link

    #17 My thoughts exactly ! :)
  • DerekWilson - Tuesday, September 21, 2004 - link

    #8:

    ATI has stated that they will be bridging the RV410 back to AGP from PCIe -- they will not be running separate silicon. They didn't have any kind of date they could give us, but they did indicate that it should be available before the end of the year. It's just hard to trust having such distant dates thrown around when both ATI and NVIDIA have shown that they have problems filling the channel with currently announced products.


    #18:

    This is likely a result of the fact that only the X700 XT, 6600 GT, and X600 XT were run with the most recent drivers -- the 6800 series cards are still running on 61.xx while the 6600 GT was powered by the 65.xx drivers. We are looking into a driver regression test, and we will take another look at performance with the latest drivers as the dust starts to settle.
  • Aquila76 - Tuesday, September 21, 2004 - link

    OK, I phrased the first part of my post VERY badly. In my defense, I had not yet had any coffee. ;)
    What I was trying to get across was that ATI does OK competing with NVidia in DX games, but still gets killed in OpenGL. They used to smoke NVidia in DX, but now NVidia has fixed whatever issues they had with that and are making a very good competitive card to ATI's offering. The 6600GT is clearly the better card here, for either D3 or HL2 engines.
