Recently we received confirmation that the first retail samples of ATI's R420 (AGP Radeon X800) will debut April 26 as Radeon X800 Pro. ATI's naming scheme for R420 has been closely guarded, but the name vendors are using openly is "Radeon X800."

What seems highly unusual is the scheduled introduction of Radeon X800 XT on May 31st, only a month after Radeon X800 Pro's unveiling. Recall that Radeon 9800 and 9800 XT were launched six months apart. We can only speculate: either ATI has changed its marketing strategy, or the performance difference between R420 and NV40 has hastened ATI's release schedule. Further inspection of the ATI roadmaps reveals that "Non-Pro" Radeon X800s are absent. Perhaps "XT" has replaced the "Pro" tier of the Radeon series, and "Pro" has in turn replaced the "Non-Pro" tier. Even though the initial Radeon X800 launches will use 256MB of GDDR3, we also anticipate a 512MB revision before the end of the year. Furthermore, we will almost certainly see a Radeon X800 SE with 128MB of DDR1, which will also debut at a much lower clock speed.

R423, the PCI-Express (PCX) version of R420, is scheduled to launch June 14th. Specifications for R423 are identical to those of R420, and the cards will also carry the Radeon X800 product name.

RV380 and RV370 will also receive new product names, as Radeon X600 and Radeon X300, respectively. For more details about R420, RV380 and RV370, please take a look at our previous ATI roadmaps here. Stay tuned for more ATI and NVIDIA news from the trenches.

Update April 9, 2004: We just received confirmation that Radeon X800 Pro will run 12 pipelines, Radeon X800 XT will run 16, and Radeon X800 SE will run 8. It is important to note that all three of these chips are based on the same R420/R423 core. ATI could have an overclocker's and softmodder's dream on its hands with the X800 Pro and SE derivatives! This also comes as somewhat of a surprise, since original leaked ATI documents claimed R420 would utilize 8 "Extreme" pipelines.
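As a rough illustration of what those pipeline counts mean, theoretical pixel fillrate scales with pipelines times core clock. A minimal sketch follows; the pipeline counts are from the update above, but the core clocks are hypothetical placeholders (ATI had not announced clock speeds), not confirmed specifications:

```python
# Rough theoretical pixel fillrate for the rumored R420 variants.
# Pipeline counts come from the article; the core clocks below are
# HYPOTHETICAL placeholders, not confirmed specs.
variants = {
    "X800 SE":  {"pipelines": 8,  "core_mhz": 400},
    "X800 Pro": {"pipelines": 12, "core_mhz": 475},
    "X800 XT":  {"pipelines": 16, "core_mhz": 500},
}

for name, spec in variants.items():
    # fillrate (megapixels/s) = pipelines * core clock (MHz)
    fillrate = spec["pipelines"] * spec["core_mhz"]
    print(f"{name}: {fillrate} Mpix/s ({fillrate / 1000:.1f} Gpix/s)")
```

Whatever the final clocks turn out to be, the 16-pipeline XT should hold roughly a 33% fillrate edge over the Pro at equal clocks, which is why an unlockable 12- or 8-pipeline part would be so attractive to softmodders.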


  • spite - Tuesday, April 6, 2004 - link

    You have to be kidding me. Complaining about the naming schemes? Read more closely. The NVidia names are getting shorter, while the ATI names are staying the same. GeForce 6800, Radeon X800. WHAT COULD BE SIMPLER? No more FX, which wasn't any more complicated than the old GeForce 4 etc. system, but seemed to get people so upset. Maybe you want names like "Bob" or "Martha". Or maybe just "Q". Find another bitching bandwagon to jump on.
  • Da3dalus - Tuesday, April 6, 2004 - link

    /me awaits the benchmarking frenzy...

    :D
  • SubKamran - Tuesday, April 6, 2004 - link

    EXCELLENT. If they come out quick, it'll push down the 9600XT for my brother! LESS MONEY TO SPEND!

    Then next year I'll buy the X series... :P
  • aw - Tuesday, April 6, 2004 - link

    Nvidia and ATI's marketing departments must really suck. They should all be fired! The naming schemes are bordering on ridiculous. No, scratch that... they are ridiculous. It serves no purpose but to create confusion. Finally someone at Intel realized how stupid their naming scheme was and they are simplifying it. One can only hope that Nvidia and ATI will get a clue and follow their lead.

    Judging by these names, they aren't "ExTReMe" enough. I am going to wait for the

    Nvidia InFiNItY FX(XX) GeForce Triple eXXXtreme NV4658700000 with PCI XXXpress and 1,000 MB of 4x DDR2 Cas2 Memory...;-)

    Plus hopefully they will get a clue that most people aren't going to spend more than it costs to build a complete computer on a video card (unless they throw in a 19" LCD with it).

    Usually Anandtech comments on stupid naming schemes. This time they didn't. Too bad...

    Nvidia and ATI... Keep It Simple, Stupid! It always works.
  • dgrady76 - Tuesday, April 6, 2004 - link

    I have a feeling that Doom 3 and Half Life 2 will have NO problem whatsoever taking advantage of extra memory and horsepower. Anti aliasing, Aniso, real-time lighting, huge textures, etc....

    Revolutionary steps CAN happen overnight- why it seems just yesterday I had to pick up my jaw off the floor after witnessing GLQuake for the first time. I definitely didn't see that one coming.

    I'll never pay more than $250 for a video card, but anyone who questions bigger, better, faster, more when it comes to computers is always proved wrong in the end.
  • PrinceGaz - Tuesday, April 6, 2004 - link

    It's nice to have some names to throw around; can't wait for the benchmarks next week :)

    Does the NDA for the Radeon X800 expire the same day as the GeForce 6800 so we get all the results in one big article, or will we have to wait a little longer for it?
  • Marsumane - Tuesday, April 6, 2004 - link

    I might be wrong on my numbers, but from what I remember reading, ATI's solution allows data to transfer at 4GB/s in BOTH directions, allowing for a potential max of 8GB/s for data to flow to and from the video card. Nvidia's is equal to AGP 16x minus the latencies added by the bridge. This would allow 4GB/s in only one direction at a time, plus the additional latency. Seeing how graphics cards' data mostly travels from the graphics card, and doesn't even use that much bandwidth to send the data anyway, Nvidia's solution should yield almost the same results, provided the latencies are closer to marginal than high. This should save Nvidia a buttload of money. Personally, I would have bridged the PCIX version to work w/ AGP, and not vice versa, due to the PCIX card most likely being the one that will bench the highest of the two and have a better shot at the top spot.

    And I don't see the "X" as being ridiculous in this case, due to it meaning "10" and not "Extreme" or "Xtreme". I just want to know what they will do when they hit "11". XI800!? lol
  • ViRGE - Tuesday, April 6, 2004 - link

    #12, in PCI-E mode, the NV40 will have just as much bandwidth as PCI-E 16x has (thanks to an overclocked internal AGP bus); the difference is that Nvidia's solution is half-duplex, whereas a full PCI-E solution is full duplex. You are right, however, in that neither is significant at this point; besides a few fringe apps, nothing is close to overwhelming AGP 8x.
  • Cat - Tuesday, April 6, 2004 - link

    Huge amounts of video memory allow for much higher resolution shadow maps, for one thing. These are easily created on the fly, rather than taking artists' time as normal textures do. I can see the immediate impact of having more video memory there, since aliased shadow maps are a good 'suspension of disbelief'-breaker.

    Also remember that not just textures are stored on the card, but geometry as well.
  • Baldurga - Tuesday, April 6, 2004 - link

    Icewind, wait until April 13th and we will know! ;-)
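The full-duplex versus half-duplex bandwidth argument in the comments above boils down to simple arithmetic. A minimal sketch, using the nominal figures cited in those comments (real-world throughput would depend on latency and protocol overhead):

```python
# Sketch of the bus-bandwidth argument from the comments above.
# Figures are nominal, as cited in the thread.
PER_DIRECTION_GB = 4.0  # PCIe x16 nominal bandwidth per direction

# Full duplex: both directions can run at peak simultaneously,
# so the aggregate is twice the per-direction rate.
pcie_aggregate = 2 * PER_DIRECTION_GB   # 8.0 GB/s

# Half duplex (a bridged AGP-style link at the same peak rate):
# only one direction transfers at a time, so the aggregate under
# bidirectional load never exceeds the single-direction peak.
bridged_aggregate = PER_DIRECTION_GB    # 4.0 GB/s

print(f"PCIe x16 full-duplex aggregate: {pcie_aggregate} GB/s")
print(f"Bridged half-duplex aggregate:  {bridged_aggregate} GB/s")
```

As the commenters note, since a graphics card's traffic is heavily asymmetric (mostly toward the card), the half-duplex link loses far less in practice than the 2x aggregate gap suggests.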
