Scaling Down the X800

The basic architecture of the X700 has already been covered, as it is based on the X800's R423 core. The X700 takes it down a notch, running only 8 pixel pipelines at a maximum core clock of 475MHz on the XT version. The memory interface on the X700 series is 128-bit, giving it half the memory bandwidth (on a clock-for-clock basis) of its big brothers. Oddly, this time around there are two $200 versions of the card. The X700 XT (which we are testing today) has a 475MHz core clock, a 1.05GHz effective memory clock, and 128MB of RAM. The X700 Pro will launch with 420MHz core and 864MHz memory clocks and 256MB of RAM. We are definitely interested in testing the performance characteristics of the X700 Pro to determine whether the lower clocks can take advantage of the larger memory to deliver performance on par with its price, but for now we'll have to settle for X700 XT numbers. ATI will also produce an X700 card with a 400MHz core and 128MB of 700MHz memory that will retail for $149.
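
As a quick sanity check on those bandwidth claims, here is the peak-bandwidth arithmetic as a minimal Python sketch. It uses the quoted effective memory clocks and computes theoretical peaks only; real-world throughput will be lower:

    # Peak memory bandwidth = (bus width in bytes) x (effective memory clock).
    def peak_bandwidth_gbps(bus_width_bits, effective_clock_mhz):
        """Theoretical peak memory bandwidth in GB/s."""
        return (bus_width_bits / 8) * effective_clock_mhz / 1000

    print(peak_bandwidth_gbps(128, 1050))  # X700 XT: 128-bit @ 1.05GHz = 16.8 GB/s
    print(peak_bandwidth_gbps(256, 1050))  # a 256-bit X800 at the same clock = 33.6 GB/s
    print(peak_bandwidth_gbps(128, 700))   # $149 X700: 128-bit @ 700MHz = 11.2 GB/s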

In a very interesting twist, one of the features of R423 that didn't get scaled down is the vertex engine. The RV410 GPU retains all 6 vertex pipelines. This should give it an advantage over NVIDIA's solution in geometry-heavy environments, especially where an underpowered CPU is involved; NVIDIA opted to keep only 3 of NV40's 6 vertex pipes. Of course, NVIDIA's solution is also capable of VS3.0 (Vertex Shader 3.0) functionality, which should help if developers take advantage of those features. Vertex performance between these midrange cards will be an ongoing battle as more features are supported and drivers mature. We will also have to come back and examine the cards on an underpowered CPU to see whether the potential advantage of ATI's wider vertex engine manifests itself in gameplay on a midrange-priced system.
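
To put rough numbers on that gap, here is a hypothetical Python sketch of peak vertex throughput. The one-vertex-per-pipe-every-four-clocks figure is our assumption (it roughly matches how both vendors tend to quote peak transform rates), and real rates will vary with shader length and driver maturity:

    # Idealized peak vertex rate = vertex pipes x core clock / clocks per vertex.
    CLOCKS_PER_VERTEX = 4  # assumed cost of a basic transform, not a measured value

    def peak_vertex_rate_mverts(vertex_pipes, core_clock_mhz):
        """Idealized peak throughput in millions of vertices/second."""
        return vertex_pipes * core_clock_mhz / CLOCKS_PER_VERTEX

    print(peak_vertex_rate_mverts(6, 475))  # X700 XT: ~712.5M vertices/s
    print(peak_vertex_rate_mverts(3, 500))  # 6600 GT: ~375M vertices/s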

In addition to scaling down a few key features of the X800, ATI physically scaled down the RV410 by fabbing it on TSMC's 110nm process, the same process used in the X300 series. This isn't a surprising move, as ATI has been introducing new fab processes on cheaper parts in order to work out any wrinkles before a brand-new architecture is pushed out. NVIDIA also went with the 110nm process this round.

Interestingly, ATI is pushing its 110 million transistors at a 25MHz lower core clock than NV43. This gives NVIDIA a fillrate advantage, though pure fillrate means very little these days. At first, we were very surprised that ATI didn't push their GPU faster. After all, RV410 is basically a cut-down R423 with a die shrink, and the X800 XT runs at the same core speed as the 6600 GT (500MHz). If NVIDIA could increase its clock speed when moving from the 6800 series to the narrower and smaller 6600 series, why couldn't ATI do the same when moving from X800 to X700?
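
That fillrate edge is simple multiplication, pixel pipelines times core clock. A minimal sketch with the quoted clocks shows how small the theoretical difference is:

    # Theoretical peak pixel fillrate = pixel pipelines x core clock.
    def peak_fillrate_mpix(pixel_pipes, core_clock_mhz):
        """Theoretical peak fillrate in megapixels/second."""
        return pixel_pipes * core_clock_mhz

    print(peak_fillrate_mpix(8, 475))  # X700 XT: 3800 Mpixels/s
    print(peak_fillrate_mpix(8, 500))  # 6600 GT: 4000 Mpixels/s (~5% advantage)

The ~5% fillrate gap is small; the more interesting question is why the clock gap exists at all.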

In asking ATI about this, we got quite a few answers, the combination of which generally satisfied us. First, ATI's 130nm process is a low-k process, meaning there is less capacitance between metal layers on the die, which gives ATI's GPUs added stability at higher core speeds. Shrinking to a 110nm process and losing the low-k advantage makes this different from a simple die shrink. Also, ATI's X800 series of cards are supplied with external power, while the X700 relies only on the PCIe connection for juice. This could easily explain why the X700 XT is clocked lower than the X800 XT, and the power drawn by RV410's extra vertex hardware could help explain why NVIDIA was able to hit a higher clock speed than ATI. We could be wrong, but without knowing both companies' yield rates and power consumption, we really can't say for sure why things ended up this way.

The X700 retains the video processing features of the X800 as well, and we are starting to see both ATI and NVIDIA gearing up for some exciting things in video. Additionally, ATI has stated that the X700 is capable of running stably with passive cooling in a BTX system (albeit with a massive heatsink). This, of course, is something the X800 cannot do. ATI has said that the initial BTX versions of its cards will be OEM only, as that is where BTX will make its first mark.

Now that we know a little more about the X700, let's take a look at the numbers.

40 Comments

  • Entropy531 - Tuesday, September 21, 2004 - link

    Didn't the article say the Pro (256MB) was the same price as the XT (128MB)? It does seem odd that the 6600s are PCIe-only. Especially since nVidia only makes motherboards with AGP slots, right?
  • Drayvn - Tuesday, September 21, 2004 - link

    However, this site http://www.hothardware.com/viewarticle.cfm?article... shows the X700XT edging out a win overall.

    What I think is that ATi is doing what nVidia did in the high-end market: they brought out the X700 Pro, which is very close to the X700 XT but cheaper, and probably highly moddable.

    Buy an X700 Pro with a 5-10% loss of performance for $60 less?
  • blckgrffn - Tuesday, September 21, 2004 - link

    What mystifies me (still) is the performance discrepancy between the 6800 and 6600 GT. In some cases, the 6600 GT is whooping up on it. The 6600GT preview article made some allusions to 12 pipes not being as efficient as 8 and 16, etc. But if the performance is really so close between them, the 6800 is probably going to go the way of the 9500 Pro. That's too bad; my 6800 clocked at 400/825 is pretty nice. If anyone could clear up why the 6600 GT is faster than the 6800, that would be nice. The fill rates should be nearly identical, I guess. But doesn't the 6800 retain its 6 vertex shaders, and wouldn't the extra memory bandwidth make a noticeable difference?
  • Resh - Tuesday, September 21, 2004 - link

    Just wish nVidia would come out with the NF4 NOW with PCI-Express, etc. A board with two 16x slots, with one 6600GT now and one later, is looking pretty awesome.
  • rf - Tuesday, September 21, 2004 - link

    Looks like ATI dropped the ball - 12 months or more kicking nVidia's ass, and now they are the ones lagging behind.

    Oh well, I am not in the market for a graphics card at the moment (bought a 9800XT last year) but if I was, I'd be switching to nVidia.

    I do have to say that the move away from AGP is annoying. What about the people who want to upgrade their components? Are we supposed to ditch kit that is less than 6 months old?
  • ZobarStyl - Tuesday, September 21, 2004 - link

    I must agree that, all things considered, the 6600GT really comes out the winner... I mean, look at the x800/6800 launch: the x800 Pro looked like it just massacred the 6800GT, and now, at the $400 price point, no one thinks twice about which is better, because nV put out some massive driver increases. Considering the 6600GT already has the performance AND feature advantage over the x700, there's just no contest when you add in what the nV driver team is going to do for its perf. Can't wait to dual up two 6600GTs (not SLI, multimonitor =) )
  • LocutusX - Tuesday, September 21, 2004 - link

    Just to be clear, I think #3's statement was invalid simply because Nvidia is winning half the Direct3D games as well as all the OGL games.
  • LocutusX - Tuesday, September 21, 2004 - link

    #3: "Again we see ATI=DX, nVidia=OpenGL. "

    Nah, don't think so. Here are the notes I took while I read the article;

    6600gt

    d3 (big win) - OGL
    far cry (with max AA/AF) - DX9
    halo - DX9
    jedi academy (big win) - OGL
    UT (tie) - DX8/DX9

    x700xt

    far cry (with NO aa/af) - DX9
    source engine (small win) - DX9
    UT (tie) - DX8/DX9

    I'm sorry to say it, but the X700XT is a disappointment. I'm not an "nvidiot"; check my forum profile, I'm an ATI owner.
  • Shinei - Tuesday, September 21, 2004 - link

    #11: Probably because you won't have much money left for a video card after you buy all the new crap you need for a Prescott system. ;)

    Anyway, this quote made me wonder a bit.
    "From this test, it looks like the X700 is the better card for source based games unless you want to run at really high quality settings."
    Er, if I can get great graphics at a decent framerate (42fps is pretty good for 16x12 with AA/AF, if you ask me (beats the hell out of Halo's horribly designed engine)), why WOULDN'T I turn on all the goodies? Then again, I used to enable AA/AF with my Ti4200 too, so my opinion may be slightly biased. ;)
  • Woodchuck2000 - Tuesday, September 21, 2004 - link

    #10 - I agree entirely! These are midrange cards, yet they're released first as PCIe parts, and PCIe is only available on high-end Intel platforms. Why does this make sense?
