R580 Architecture

The architecture itself is not that different from the R520 series. A couple of tweaks found their way into the GPU, but these consist mainly of the same improvements made to the RV515 and RV530 over the R520 due to their longer lead time (the only reason all three parts arrived at nearly the same time was a bug that delayed the R520 by a few months). For a quick look at what's under the hood, here are the R520 and R580 vertex pipelines:

[Diagram: R520 and R580 vertex pipelines]

and the internals of each pixel quad:

[Diagram: pixel quad internals]

The real feature of interest is the ability to load and filter four values from a single-channel texture map at once. Textures that describe color generally have four components at every location in the texture, and normally the hardware will load an address from a texture map, split the four channels, and filter them independently. In cases where single-channel textures are used (ATI likes to use the example of a shadow map), the R520 will look up the appropriate address and filter the single channel, letting the hardware's ability to filter three other components go to waste. With what ATI calls its Fetch4 feature, the R580 is capable of loading three other adjacent single-channel values from the texture and filtering these at the same time. This effectively loads and filters four times the texture data when working with single-channel formats. Traditional color textures, or textures describing vector fields (which make use of more than one channel per position in the texture), will not see any performance improvement, but for some soft shadowing algorithms the performance increase could be significant.
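
To make that concrete, here is a minimal CPU-side sketch of the idea (our own illustration with made-up names, not ATI's hardware or API): a conventional fetch from a single-channel map returns one value per address, while a Fetch4-style gather returns the 2x2 neighborhood in a single operation, which is exactly what a percentage-closer soft shadow filter wants.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Hypothetical single-channel texture, e.g. a shadow (depth) map.
struct Texture1Ch {
    std::size_t width = 0, height = 0;
    std::vector<float> texels;
    float at(std::size_t x, std::size_t y) const { return texels[y * width + x]; }
};

// Conventional path: one address lookup yields one value, and the
// hardware's ability to filter three more channels goes to waste.
float fetch1(const Texture1Ch& t, std::size_t x, std::size_t y) {
    return t.at(x, y);
}

// Fetch4-style path: one address lookup returns the 2x2 neighborhood.
// (Bounds clamping omitted for brevity.)
std::array<float, 4> fetch4(const Texture1Ch& t, std::size_t x, std::size_t y) {
    return { t.at(x, y),     t.at(x + 1, y),
             t.at(x, y + 1), t.at(x + 1, y + 1) };
}

// Percentage-closer filtering: compare each gathered depth sample
// against the fragment's depth and average the results.
float pcfShadow(const Texture1Ch& shadowMap, std::size_t x, std::size_t y,
                float fragmentDepth) {
    float lit = 0.0f;
    for (float depth : fetch4(shadowMap, x, y))
        lit += (fragmentDepth <= depth) ? 1.0f : 0.0f;  // 1 = lit, 0 = shadowed
    return lit * 0.25f;  // soft shadow factor from four samples in one fetch
}
```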

That's really the big news in feature changes for this part. The actual meat of the R580 comes in something Tim Allen could get behind with a nice series of manly grunts: More power. More power in the form of a 384 million transistor 90nm chip that can push 12 quads (48 pixels) worth of data around at a blisteringly fast 650MHz. Why build something different when you can just triple the hardware?
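
Some quick back-of-the-envelope math shows what the tripling buys in peak shader throughput. The R520 baseline below (16 pixel shader processors at 625MHz for the X1800 XT) is our assumption, not a figure from this article:

```cpp
#include <cstdio>

int main() {
    // R520 (X1800 XT): 16 pixel shader processors; the 625MHz core
    // clock is our assumed baseline.
    const double r520Pipes = 16, r520ClockGHz = 0.625;
    // R580 (X1900 XTX): 48 pixel shader processors at 650MHz (per the article).
    const double r580Pipes = 48, r580ClockGHz = 0.650;

    const double r520 = r520Pipes * r520ClockGHz;  // billions of shader ops/sec
    const double r580 = r580Pipes * r580ClockGHz;
    std::printf("R520: %.1f Gops/s peak pixel shader rate\n", r520);
    std::printf("R580: %.1f Gops/s (%.2fx the R520)\n", r580, r580 / r520);
    return 0;
}
```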

To be fair, it's not a straight tripling of everything, and the result looks more like four X1600 parts than three X1800 parts. The proportions match what we see in the current midrange part: all you need for efficient processing of current games is a three-to-one ratio of pixel pipelines to render backends or texture units. When the X1000 series initially launched, we did look at the X1800 as a part that had as much crammed into it as possible, while the X1600 was a little more balanced. Focusing on pixel horsepower makes more efficient use of texture and render units when processing complex and interesting shader programs. If a shader program does more math than texture loads, we don't need enough hardware to load a texture every single clock cycle for every pixel; requests can be queued up and aggregated to keep the available resources busy more consistently. And since texture load latency already has to be hidden (even a trip to local video memory isn't instantaneous yet), the machinery for handling this situation is already in place.
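
Here is a toy model of that argument (our simplification, not ATI's actual scheduler): with 48 pixel shader processors fed by 16 texture units, a shader only becomes texture-bound when it issues fewer than three math operations per texture load.

```cpp
#include <cstdio>

// Toy bottleneck model: which unit type limits a shader with a given
// instruction mix? (Our simplification; real scheduling, latency
// hiding, and batching are far more involved.)
void bottleneck(int aluOpsPerPixel, int texLoadsPerPixel) {
    const int aluUnits = 48;  // R580 pixel shader processors
    const int texUnits = 16;  // texture units (unchanged from the R520)
    const int batch    = 48;  // pixels issued per batch in this toy model

    // Clocks needed to push one batch of pixels through each unit type.
    const double aluClocks = double(aluOpsPerPixel)   * batch / aluUnits;
    const double texClocks = double(texLoadsPerPixel) * batch / texUnits;

    const char* verdict = aluClocks > texClocks ? "ALU-bound"
                        : aluClocks < texClocks ? "texture-bound"
                                                : "balanced";
    std::printf("%2d ALU : %d TEX -> %s\n",
                aluOpsPerPixel, texLoadsPerPixel, verdict);
}

int main() {
    bottleneck(1, 1);   // texture-bound: below the 3:1 design point
    bottleneck(3, 1);   // balanced: exactly the 3:1 ratio
    bottleneck(12, 2);  // ALU-bound: math-heavy shaders keep all 48 ALUs busy
    return 0;
}
```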

Other than keeping the number of texture and render units the same as the X1800 (giving the X1900 the same ratio of math to texture/fill rate power as the X1600), there isn't much else to say about the new design. Yes, they increased the number of registers in proportion to the increase in pixel power. Yes, they widened the dispatch unit to compensate for the added load. Unfortunately, ATI declined to let us post the HDL code for their shader pipeline, citing some ridiculous notion that their intellectual property has value. But we can forgive them for that.
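
For a sense of why the register count has to scale with pixel power, consider a rough occupancy sketch (the numbers below are invented purely for illustration): the register file determines how many pixels can be in flight, and pixels in flight are what hide texture latency.

```cpp
#include <cstdio>

// Rough occupancy sketch: threads (pixels) in flight are what hide
// texture latency, and register file size caps how many fit.
// All figures below are invented purely for illustration.
int main() {
    const int registerFileEntries = 4096;  // hypothetical register file size
    const int regsPerPixelSimple  = 4;     // a lightweight shader
    const int regsPerPixelHeavy   = 16;    // a register-hungry shader

    std::printf("simple shader: %d pixels in flight\n",
                registerFileEntries / regsPerPixelSimple);
    std::printf("heavy shader:  %d pixels in flight\n",
                registerFileEntries / regsPerPixelHeavy);
    // Triple the ALUs without growing the register file and the pixels
    // in flight per ALU drop to a third -- latency hiding suffers.
    return 0;
}
```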

This handy comparison page will have to do for now.

Comments

  • poohbear - Tuesday, January 24, 2006 - link

    $500 too much? there are cars for $300,000+, but u dont see the majority of ppl complaining because they're NOT aimed at u and me, and ferrari & lamborghini could care less what we think cause we're not their target audience. get over yourself, there ARE cards for you in the $100-$300 range, so what are u worried about?
  • timmiser - Tuesday, January 24, 2006 - link

    While I agree with what you are saying, we are already on our 3rd generation of $500 high-end graphics cards. If memory serves, it was the Nvidia 6800 that broke the $500 barrier for a single card solution.

    I'm just happy it seems to have leveled off at $500.
  • Zebo - Tuesday, January 24, 2006 - link

    Actually GPUs in general scale very well on price/performance, and this is no exception. It's twice as fast as an X850 XT, which you can get for $275, so it should cost twice as much, or $550, which it does. If you want to complain about prices, look at CPUs, high-end memory, and Raptors/SCSI, where the higher-end items offer small benefits for huge price premiums.
  • fishbits - Tuesday, January 24, 2006 - link

    Geez, talk about missing the point. News flash: Bleeding edge computer gear costs a lot. $500 is an excellent price for the best card out. Would I rather have it for $12? Yes. Can I afford/justify a $500 gfx card? No, but more power to those who can, and give revenue to ATI/Nvidia so that they can continue to make better cards that relatively quickly fall within my reach. I can't afford a $400 9800 pro either... whoops! They don't cost that much now, do they?

    quote:

    Even if you played a game that needs it, you should be pissed at the game company that puts out a bloated mess that needs a $500 card.

    Short-sighted again. Look at the launch of Unreal games for instance. Their code is always awesome on the performance side, but can take advantage of more power than most have available at release time. You can tell them their code is shoddy, good luck with that. In reality it's great code that works now, and your gaming enjoyment is extended as you upgrade over time and can access better graphics without having to buy a new game. Open up your mind, quit hating and realize that these companies are giving us value. You can't afford it now, neither can I, but quit your crying and applaud Nv/ATI for giving us constantly more powerful cards.
  • aschwabe - Tuesday, January 24, 2006 - link

    Agreed, I'm not sure how anyone considers $500 for ONE component a good price. I'll pay no more than $300-350 for a vid card.
  • bamacre - Tuesday, January 24, 2006 - link

    Hear, hear!! A voice of reason!
  • rqle - Tuesday, January 24, 2006 - link

    I like the new line graph colors and interface, but I like bar graphs so much more. Never been a big fan of SLI or Crossfire on the graphs; it makes them distracting, especially since they only represent a small group. Wonder if Crossfire and SLI could have their own graphs by themselves, or maybe their own color. =)
  • DerekWilson - Tuesday, January 24, 2006 - link

    it could be possible for us to look at multi-GPU solutions separately, but it is quite relevant to compare single card performance to multi-GPU performance -- especially when trying to analyze performance.
  • Live - Tuesday, January 24, 2006 - link

    Good reading! Good to see ATI getting back in the game. Now lets see some price competition for a change.

    I don't understand what CrossFire XTX means. I thought there was no XTX CrossFire card? Since the CrossFire and XT have the same clocks, it shouldn't matter if the other card is an XTX. By looking at the graphs it would seem I was wrong, but how can this be? That would indicate that the XTX has more going for it than just the clocks, but that's not so, right?

    Bah, I'm confused :)
  • DigitalFreak - Tuesday, January 24, 2006 - link

    My understanding is that Crossfire is async, so both cards run at their maximum speed. The XTX card runs at 650/1.55, while the Crossfire Edition card runs at 625/1.45. You're right, there is no Crossfire Edition XTX card.
