Introduction

Take all the clichés used to describe a long overdue event or the unexpected fulfillment of a promise (hot places freezing, heavy animals soaring through the air, etc.) and you still couldn't say enough to fully proclaim the news: ATI has finally, properly, hard launched a product. That's right, looking around the internet this morning provided us with the joyous realization that the Radeon X1900 XT, XTX, and CrossFire parts are available for purchase. We've been keeping an eye on the situation, and it was quite easy to see that ATI would be able to pull it off this time. Some sites started taking preorders earlier in the week saying their X1900 parts would ship in one to two days, putting the timeframe right on the mark. There were no missing dongles, no problems with customs, and ATI told us last week that thousands of parts had already been delivered to manufacturers.

And if that isn't enough to dance about, ATI has delivered a hugely powerful part with this launch. The Radeon X1900 series is no joke, and every card bearing the name is a behemoth. With triple the pixel shader units of the X1800 XT, and a general increase in supporting hardware throughout the pixel processing engine, ATI's highly clocked, 384 million transistor GPU is capable of crunching enormous volumes of data very quickly. Fill rate isn't increased very much, because the X1900 series still only allows 16 pixels to be drawn to the screen per clock cycle, but power is delivered where it is needed most. As shader programs grow longer and more complex, pixels spend more time in the shader engine, which shifts the performance burden further away from theoretical maximum fill rate and onto shader throughput.
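The asymmetry described above can be put into rough numbers. A back-of-envelope sketch using ATI's published specs (X1800 XT: 625 MHz core, 16 pixel shader processors, 16 ROPs; X1900 XTX: 650 MHz core, 48 pixel shader processors, 16 ROPs) shows why the new part barely moves fill rate but roughly triples shader math:

```python
# Rough throughput comparison between X1800 XT and X1900 XTX,
# based on ATI's published specifications. This is a simplification:
# real shader throughput depends on ALU configuration and scheduling.

def fill_rate_gpix(rops, clock_mhz):
    """Theoretical pixel fill rate in gigapixels/second."""
    return rops * clock_mhz / 1000

def shader_rate(shader_units, clock_mhz):
    """Relative pixel shader throughput (units x clock, arbitrary scale)."""
    return shader_units * clock_mhz

x1800_fill = fill_rate_gpix(16, 625)   # 10.0 Gpix/s
x1900_fill = fill_rate_gpix(16, 650)   # 10.4 Gpix/s

x1800_shade = shader_rate(16, 625)
x1900_shade = shader_rate(48, 650)

print(f"Fill rate gain:   {x1900_fill / x1800_fill:.2f}x")    # ~1.04x
print(f"Shader ops gain:  {x1900_shade / x1800_shade:.2f}x")  # ~3.12x
```

In other words, only the small clock bump raises fill rate, while the tripled shader unit count dominates whenever a game is shader-bound rather than fill-bound.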

NVIDIA would like us to compare the X1900's increase in ALU (arithmetic logic unit) power to what they did with the FX 5900 after NV30 tanked. Certainly, increasing the math power (and increasing memory bandwidth) helped NVIDIA, but fortunately for ATI the X1900 is not derived from a fundamentally flawed GPU design. The X1800 series are certainly not bad parts, even if they are being completely replaced by the X1900 in ATI's lineup.

I'll spoil the results and make it clear that the X1900XT and XTX are hands down the best cards out there right now. But all positives aside, ATI needed this card to hard launch with good availability, perform better than anything else, and look good doing it. There have been too many speed bumps in ATI's way for there to be any room for a slip up on this launch, and it looks like they've pulled it off. The launch of the X1900 series not only puts ATI back on top, but (much more importantly) it puts them back in the game. Let's hope that both ATI and NVIDIA can keep up the good fight.

But let's not forget why we're here. The first thing we are going to do is talk about what makes the R580 GPU that powers the X1900 series so incredibly good at what it does.

R580 Architecture
120 Comments

  • photoguy99 - Tuesday, January 24, 2006 - link

    Why do the editors keep implying the power of cards is "getting ahead" of games when it's actually not even close?

    - 1600x1200 monitors are pretty affordable
    - 8xAA does look better than 4xAA
    - It's nice to play games with a minimum frame rate of 50-60

    Yes, these are high end desires, but the X1900XT can't even meet these needs despite its great power.

    Let's face it - the power of cards could double tomorrow and still be put to good use.
  • mi1stormilst - Tuesday, January 24, 2006 - link

    Well said well said my friend...

    We need to stop being so impressed by so very little. When games look like REAL LIFE does, with lots of colors, shading, and no jagged edges (unless it's from the knife I just plunged into your eye) lol, you get the picture.
  • poohbear - Tuesday, January 24, 2006 - link

    Technology moves forward at a slower pace than that, mates. You expect every vid card to be a 9700 Pro?! Right. There has to be a pace the developers can follow.
  • photoguy99 - Wednesday, January 25, 2006 - link

    I think we are agreeing with you -

    The article authors keep implying they have to struggle to push these cards to their limit because they are getting so powerful so fast.

    To your point, I do agree it's moving forward slowly - relative to what people can make use of.

    For example, 90% of Office users cannot make use of a faster CPU.

    However, 90% of gamers could make use of a faster GPU.

    So even though GPU performance is doubling faster than CPU performance, they should keep it up, because we can and will use every ounce of it.
  • Powermoloch - Tuesday, January 24, 2006 - link

    It is great to see that ATi is doing their part right ;)
  • photoguy99 - Tuesday, January 24, 2006 - link

    When DX10 is released with Vista, it seems like this card will be in the same position as an SM2.0 part - behind the curve again.

    Yea, I know there is always something better around the corner - and I don't recommend waiting if you want a great card now.

    But I'm sure some people would like to know.
  • Spoelie - Thursday, January 26, 2006 - link

    Not at all. I do not see DX10 arriving before Vista near the end of this year, and if it does come earlier, it will not make any splash whatsoever on game development before then. Even so, you cannot be 'behind' if your only competitor is still at SM3.0 as well. As far as I can tell, there will be no HARD architectural changes in G71/7900 - they might improve tidbits here and there, like support for AA while doing HDR rendering, but that will be about the full extent of the changes.
  • DigitalFreak - Tuesday, January 24, 2006 - link

    True, but I'm betting it will be quite a while before we see any DX10 games. I would suspect that the R620/G80 will be DX10 parts.
  • timmiser - Tuesday, January 24, 2006 - link

    I expect that Microsoft's Flight Simulator X will be the first DX10 game.
  • hwhacker - Tuesday, January 24, 2006 - link

    Question to Derek (or whomever):

    Perhaps I interpreted something wrong, but is it correct that you're saying X1900 is more of a 12x4 technology (because of fetch4) than the 16x3 we always thought? If so, that would make it A LOT more like Xenos, and perhaps R600, which makes sense, if I recall their ALU setup correctly (Xenos is 16x4, one for stall, so effective 16x3). R520 was 16x1, so...I gotta ask...Does this mean a 16x4 is imminent, or am I just reading the information incorrectly?

    If that's true, ATi really did mess with the definition of a pipeline.

    I can hear the rumours now...R590 with 16 QUADS, 16 ROPs, 16 TMUs, and 64 pixel processors...Oh yeah, and GDDR4 (on an 80nm process). You heard it here first. ;)
