Memory Architecture

One of the newest features of the X1000 series is something ATI calls a "ring bus" memory architecture. The general idea behind the design is to improve memory bandwidth effectiveness while reducing cache misses, resulting in overall better memory performance. The architecture already supports GDDR4, but current boards have to settle for the fastest GDDR3 available until memory makers ship GDDR4 parts.

For quite some time, the high end in graphics memory architecture has been a straightforward 256-bit bus divided into four 64-bit channels on the GPU. The biggest issues with scaling up this type of architecture are routing, packaging, and clock speed. Routing 256 wires from the GPU to RAM is quite complex, and cards with wide buses require printed circuit boards (PCBs) with more layers than boards with narrower buses in order to accommodate that complexity.

In order to support such a bus, the GPU has to have 256 physical external connections. Adding more and more external connections to a single piece of silicon also complicates raising clock speeds and managing timing between the memory devices and the GPU. In the push for ever improving performance, increases in clock speed and memory bandwidth are constantly evaluated for cost and benefit.

Rather than pushing up the bit width of the bus to improve performance, ATI has taken another approach: improving the management and internal routing of data. Instead of four 64-bit memory interfaces hooked into a large on-die cache, the GPU has four "ring stops" that connect to each other, graphics memory, and multiple caches and clients within the GPU. Each ring stop has two 32-bit connections to two memory modules and two outgoing 256-bit connections to two other ring stops. ATI calls this a 512-bit Ring Bus (because there are two 256-bit rings running around the ring stops).



Routing incoming memory through a 512-bit internal bus helps ATI to get data where it needs to go quickly. Each of the ring stops connects to a different set of caches. There are 30+ independent clients that require memory access within an X1000 series GPU. When one of these clients needs data not in a cache, the memory controller forwards the request to the ring stop attached to the physical memory with the data required. That ring stop then forwards the data around the ring to the ring stop (and cache) nearest the requesting client.
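ATI has not published the actual routing logic, but the forwarding scheme described above can be pictured with a toy model: four ring stops joined by two unidirectional 256-bit rings running in opposite directions, with data taking whichever ring gives the shorter path to the requesting client's stop. The function name and the shortest-path policy are our assumptions for illustration, not disclosed details.

```python
# Toy model of the ring-bus routing described above: four ring stops
# connected by two 256-bit rings running in opposite directions.
# The shortest-path policy is an assumption for illustration only.

NUM_STOPS = 4

def ring_hops(src: int, dst: int) -> tuple:
    """Return the ring direction and hop count for the shorter path
    from the ring stop holding the data to the requester's ring stop."""
    cw = (dst - src) % NUM_STOPS   # hops going one way around the ring
    ccw = (src - dst) % NUM_STOPS  # hops going the other way
    return ("clockwise", cw) if cw <= ccw else ("counter-clockwise", ccw)

# A request served by stop 0 for a client near stop 3 travels one hop
# on the counter-clockwise ring rather than three hops the other way.
direction, hops = ring_hops(0, 3)
```

With only four stops, data never travels more than two hops, which is part of why this layout can keep latency down despite the long physical distances across the die.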



The primary function of memory management shifts to keeping the caches full with relevant information. Rather than having the memory controller on the GPU aggregate requests and control bandwidth, the memory controllers and ring bus work to keep data closer to the hardware that needs it most and can deal with each 32-bit channel independently. This essentially trades bandwidth efficiency for improved latency between memory and internal clients that require data quickly. With writes cached and going through the crossbar switch and the ring bus keeping memory moving to the cache nearest the clients that need data, ATI is able to tweak their caches to fit the new design as well.

On previous hardware, caches were direct mapped or set associative. This means that every address in memory maps to a specific cache line (or set in set associative). With larger caches, direct mapped and set associative designs work well (like L3 and L2 caches on a CPU). If a smaller cache is direct mapped, it is very easy for useful data to get kicked out too early by other data. Conversely, a large fully associative cache is inefficient as the entire cache must be searched for a hit rather than one line (direct mapped) or one block (set associative).
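The difference between these mapping schemes comes down to how an address is allowed to land in the cache. A small sketch makes it concrete; the line count, line size, and way count here are illustrative values, not X1000 cache parameters.

```python
# Sketch of how an address maps to cache locations under the three
# schemes discussed above. Sizes are illustrative, not X1000 values.

LINE_SIZE = 32   # bytes per cache line
NUM_LINES = 8    # total lines in this tiny cache
WAYS = 2         # associativity for the set-associative case

def direct_mapped_line(addr: int) -> int:
    # Every address maps to exactly one line; a conflicting address
    # evicts whatever is there, however useful it still was.
    return (addr // LINE_SIZE) % NUM_LINES

def set_associative_set(addr: int) -> int:
    # The address picks a set; the line may live in any of that
    # set's WAYS slots, so only the set must be searched on lookup.
    return (addr // LINE_SIZE) % (NUM_LINES // WAYS)

def fully_associative_hit(addr: int, tags: set) -> bool:
    # No index at all: a line can sit anywhere, so a lookup must
    # compare the tag against every line in the cache.
    return (addr // LINE_SIZE) in tags
```

The direct-mapped eviction problem falls out immediately: any two addresses whose line numbers differ by a multiple of `NUM_LINES` collide on the same line, no matter how much of the rest of the cache is sitting idle.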



It makes sense that ATI would move to a fully associative cache in this situation. If they had one large cache that serviced the entire range of clients and memory, a direct mapped (or more likely an n-way set associative) cache could make sense. With this new ring bus, if ATI split caches into multiple smaller blocks that service specific clients (as it appears they may have done), fully associative caches do make sense. Data from memory will be able to fill the cache no matter where it's from, and searching smaller caches for hits shouldn't cut into latency too much. In fact, with a couple of fully associative caches heavily populated with relevant data, overall latency should be improved. ATI showed us some Z and texture cache miss rates relative to the X850. This data indicates anywhere from a 5% to 30% improvement in cache miss rates among a few popular games with their new system.

The following cache miss scaling graphs are not data collected by us, but reported by ATI, and we do not currently have a way to reproduce data like this. While verifying that the test is impartial and accurate is not possible in this situation (so take it with a grain of salt), the results are interesting enough for us to share them.





In the end, if common data patterns are known, cache design is fairly simple, and it is easy to simulate cache hit/miss data based on application traces. A fully associative cache has its downsides (latency and complexity), so simply implementing one everywhere is not an option. Rather than accepting that fully associative caches are simply "better", it is much safer to say that a fully associative cache fits the design and makes better use of available resources on X1000 series hardware when managing data access patterns common in 3D applications.
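The kind of trace-driven simulation mentioned above can be sketched in a few lines. The trace and cache sizes below are made up to demonstrate the mechanism; real simulations use traces captured from actual applications.

```python
# Minimal trace-driven cache simulators comparing a direct-mapped
# cache against a fully associative LRU cache of the same capacity.
# Trace and sizes are invented for illustration.
from collections import OrderedDict

def misses_direct_mapped(trace, num_lines, line_size=32):
    lines = {}
    misses = 0
    for addr in trace:
        tag = addr // line_size
        idx = tag % num_lines           # address dictates the line
        if lines.get(idx) != tag:
            misses += 1
            lines[idx] = tag            # evict whatever was there
    return misses

def misses_fully_associative(trace, num_lines, line_size=32):
    lru = OrderedDict()                 # ordered oldest-first for LRU
    misses = 0
    for addr in trace:
        tag = addr // line_size
        if tag in lru:
            lru.move_to_end(tag)        # hit: mark most recently used
        else:
            misses += 1
            if len(lru) >= num_lines:
                lru.popitem(last=False)  # evict least recently used
            lru[tag] = None
    return misses

# Two addresses 128 bytes apart collide in a 4-line direct-mapped cache
# (both map to line 0), but coexist happily in a fully associative one.
trace = [0, 128, 0, 128]
dm = misses_direct_mapped(trace, num_lines=4)      # misses every access
fa = misses_fully_associative(trace, num_lines=4)  # only compulsory misses
```

Feeding such simulators traces from real games is presumably how ATI arrived at the 5% to 30% miss-rate improvements they reported.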

Generally, bandwidth is more important than latency with graphics hardware as parallelism lends itself to effective bandwidth utilization and latency hiding. At the same time, as the use of flow control and branching increase, latency could potentially become more important than it is now.

The final new aspect of ATI's memory architecture is programmable bus arbitration. ATI is able to update and adapt the way the driver/hardware prioritizes memory access. The scheme is designed to weight memory requests based on a combination of latency and priority. The priority based scheme allows the system to determine and execute the most critical and important memory requests first while allowing data less sensitive to latency to wait its turn. The impression we have is that requests are required to complete within a certain number of cycles in order to prevent the starvation of any given thread, so the longer a request waits the higher its priority becomes.
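A priority-plus-aging arbiter of the kind described can be sketched as follows. The weights, cycle counts, and client names here are invented for illustration; ATI has not disclosed how its arbitration scheme actually weights requests.

```python
# Toy arbiter illustrating a priority-plus-aging scheme: each pending
# request carries a client-assigned priority, and its effective priority
# grows the longer it waits, so no request starves indefinitely.
# Weights and client names are invented for illustration.

def pick_next(requests, now, age_weight=1):
    """requests: list of (priority, issue_cycle, request_id) tuples.
    Returns the id of the request with the highest effective priority."""
    def effective(req):
        priority, issued, _ = req
        return priority + age_weight * (now - issued)  # aging boost
    return max(requests, key=effective)[2]

pending = [
    (10, 95, "texture_fetch"),  # high priority, issued recently
    (2, 50, "vertex_stream"),   # low priority, waiting 50 cycles
]
# At cycle 100 the old low-priority request wins: 2 + 50 > 10 + 5.
winner = pick_next(pending, now=100)
```

The `age_weight` knob is what a programmable arbiter would expose to the driver: raising it favors fairness and bounded latency, lowering it favors raw priority.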

ATI's ring bus architecture is quite interesting in and of itself, but there are some added benefits that go along with such a design. Altering the memory interface to connect with each memory device independently (rather than in four 64-bit wide buses) gives ATI some flexibility. Individually routing lines in 32-bit groups makes routing connections more manageable, and it's possible to increase stability (or potential clock speed) with simpler connections. We've already mentioned that ATI is ready to support GDDR4 out of the box, but there is also quite a bit of potential for hosting very high clock speed memory with this architecture. This is of limited use to customers who buy the product now, but it does give ATI the potential to come out with new parts as better and faster memory becomes available. The possibility of upgrading the two 32-bit connections to something else is certainly there, and we hope to see something much faster in the future.

Unfortunately, we really don't have any reference point or testable data to directly determine the quality of this new design. Benchmarks will show how the platform as a whole performs, but whether the improvements come from the pixel pipelines, vertex pipelines, the memory controller, ring architecture, etc. is difficult to say.


103 Comments


  • Gigahertz19 - Wednesday, October 5, 2005 - link

    On the last page I will quote

    "With its 512MB of onboard RAM, the X1800 XT scales especially well at high resolutions, but we would be very interested in seeing what a 512MB version of the 7800 GTX would be capable of doing."

    Based on the results in the benchmarks, I would say 512MB barely does anything. Look at the benchmarks on Page 10: the GeForce 7800 GTX either beats the X1800 XT or loses by less than 1 FPS. SCALES WELL AT HIGH RESOLUTIONS? Not really. Has the author of this article looked at their own benchmarks? When the resolution is at 2048 x 1536, the 7800 GTX creams the competition except in Far Cry, where it loses by .2 FPS to the X1800 XT, and Splinter Cell, where it loses by .8 FPS, so basically it's a tie in those 2 games.

    You know why Nvidia does not have a 512MB version? Because look at the results... it does shit. 512MB is pointless right now, and if you argue you'll use it for the future, then wait till future games use it and then buy the best GPU then, not now. These new ATIs blow wookies, so much for competition.
  • NeonFlak - Wednesday, October 5, 2005 - link

    "In some cases, the X1800 XL is able to compete with the 7800 GTX, but not enough to warrant pricing on the same level."

    From the graphs in the review with all the cards present, the X1800 XL only beat the 7800 GT once, by 4 FPS... So beating the 7800 GT in one graph by 4 FPS makes that statement even viable?
  • FunkmasterT - Wednesday, October 5, 2005 - link

    EXACTLY!!

    ATI's FPS numbers are a major disappointment!
  • Questar - Wednesday, October 5, 2005 - link

    Unless you want image quality.
  • bob661 - Wednesday, October 5, 2005 - link

    And the difference is worth the $100 extra PLUS the "lower" frame rates? Not good bang for the buck.
  • Powermoloch - Wednesday, October 5, 2005 - link

    Not the cards....Just the review. Really sad :(
  • yacoub - Wednesday, October 5, 2005 - link

    So $450 for the X1800XL versus $250 for the X800XL and the only difference is the new core that maybe provides a handful of additional frames per second, a new AA mode, and shader model 3.0?

    Sorry, that's not worth $200 to me. Not even close.
  • coldpower27 - Thursday, October 6, 2005 - link


    Perhaps up to a 20% performance improvement, looking at pixel fillrate alone.
    Shader Model 3.0 Support.
    ATI's Avivo Technology
    OpenEXR HDR Support.
    HQ Non-Angle Dependent AF User Choice

    You decide if that's worth the $200 US price difference to you. Adaptive AA I wouldn't count, as apparently through ATI's driver all R3xx hardware and higher now has this capability, not just R5xx derivatives; sort of like Temporal AA, a feature that launched with R4xx.
  • yacoub - Wednesday, October 5, 2005 - link

    So even if these cards were available in stores/online today, the best PCI-E card one can buy for ~$250 is still either an X800XL or a 6800GT. (Or an X800 GTO2 for $230 and flash and overclock it.)

    I find it disturbing that they even waste the time to develop, let alone release, low-end parts that price-wise can't even compete. Why bother wasting the development and processing to create a card that costs more and performs less? What a joke those two lower-end cards are (x1300 and x1600).
  • coldpower27 - Thursday, October 6, 2005 - link

    The Radeon X1600 XT is intended to replace the older X700 Pro, not the stopgap 6600 GT competitors (X800 GT, X800 GTO), which only came into being because ATI had leftover supplies of R423/R480 (and, for the X800 GTO, R430) cores, and of course due to the fact that the X700 Pro wasn't really competitive in performance with the 6600 GT in the first place, due to ATI's reliance on low-k technology for their high clock frequencies.

    I think these are successful replacements.

    Radeon X850/X800 is replaced by Radeon X1800 Technology.
    Radeon X700 is replaced by Radeon X1600 Technology.
    Radeon X550/X300 is replaced by Radeon X1300 Technology.

    X700 is 156mm2 on 110nm, X1600 is 132mm2 on 90nm
    X550 & X1300 are roughly around the same die size, sub 100mm2.

    Though the newer cards use more expensive memory types on their high end versions.

    They also finally bring ATI's entire family to the same feature set, something I believe ATI has never done before: a high end, mainstream, and budget core all based on the same technology.

    Nvidia achieved this item first with the Geforce FX line.
