Memory Architecture

One of the newest features of the X1000 series is something ATI calls a "ring bus" memory architecture. The general idea behind the design is to improve memory bandwidth effectiveness while reducing cache misses, resulting in overall better memory performance. The architecture already supports GDDR4, but current boards have to settle for the fastest GDDR3 available until memory makers ship GDDR4 parts.

For quite some time, the high end in graphics memory architecture has been a straightforward 256-bit bus divided into four 64-bit channels on the GPU. The biggest issues with scaling up this type of architecture are routing, packaging and clock speed. Routing 256 wires from the GPU to RAM is quite complex. Cards with wide buses require printed circuit boards (PCBs) with more layers than boards with narrower buses in order to accommodate all of that routing.

In order to support such a bus, the GPU has to have 256 physical external connections. Adding more and more external connections to a single piece of silicon also complicates raising clock speeds and managing timing between the memory devices and the GPU. In the push for ever improving performance, increases in clock speed and memory bandwidth are constantly evaluated for cost and benefit.

Rather than pushing up the bit width of the bus to improve performance, ATI has taken another approach: improving the management and internal routing of data. Instead of four 64-bit memory interfaces hooked into a large on-die cache, the GPU has four "ring stops" that connect to each other, graphics memory, and multiple caches and clients within the GPU. Each "ring stop" has two 32-bit connections to two memory modules and two outgoing 256-bit connections to two other ring stops. ATI calls this a 512-bit Ring Bus (because there are two 256-bit rings going around the ring stops).



Routing incoming memory through a 512-bit internal bus helps ATI to get data where it needs to go quickly. Each of the ring stops connects to a different set of caches. There are 30+ independent clients that require memory access within an X1000 series GPU. When one of these clients needs data not in a cache, the memory controller forwards the request to the ring stop attached to the physical memory with the data required. That ring stop then forwards the data around the ring to the ring stop (and cache) nearest the requesting client.
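ATI has not published the actual routing logic, but the topology described above is simple enough to sketch. The following Python model is purely our own illustration: it assumes the two 256-bit rings run in opposite directions, so a ring stop can always forward data along the shorter path. The function names and the channel-to-stop mapping are our inventions for the sake of the example.

```python
# Toy model of a 4-stop ring bus (illustrative only; not ATI's implementation).

RING_STOPS = 4  # each ring stop serves two 32-bit memory channels


def stop_for_channel(channel: int) -> int:
    """Map one of the eight 32-bit channels to its ring stop (2 per stop)."""
    return channel // 2


def ring_hops(src_stop: int, dst_stop: int) -> int:
    """Hops needed to move data from src_stop to dst_stop.

    With two counter-rotating rings, data can travel in either direction,
    so the worst case on a 4-stop ring is only 2 hops.
    """
    clockwise = (dst_stop - src_stop) % RING_STOPS
    counterclockwise = (src_stop - dst_stop) % RING_STOPS
    return min(clockwise, counterclockwise)


# A client near ring stop 0 requests data behind memory channel 5:
src = stop_for_channel(5)           # channel 5 hangs off ring stop 2
print(ring_hops(src, 0))            # 2 -- the worst case on this ring
print(max(ring_hops(a, b) for a in range(4) for b in range(4)))  # never more than 2
```

The point of the sketch is that the diameter of a 4-stop bidirectional ring is 2, so no request ever crosses more than two 256-bit links on its way to the cache nearest the requesting client.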



The primary function of memory management shifts to keeping the caches full of relevant information. Rather than having the memory controller on the GPU aggregate requests and control bandwidth, the memory controllers and ring bus work to keep data closer to the hardware that needs it most and can deal with each 32-bit channel independently. This essentially trades bandwidth efficiency for improved latency between memory and internal clients that require data quickly. With writes cached and going through the crossbar switch and the ring bus keeping memory moving to the cache nearest the clients that need data, ATI is able to tweak their caches to fit the new design as well.

On previous hardware, caches were direct mapped or set associative. This means that every address in memory maps to a specific cache line (or set in set associative). With larger caches, direct mapped and set associative designs work well (like L3 and L2 caches on a CPU). If a smaller cache is direct mapped, it is very easy for useful data to get kicked out too early by other data. Conversely, a large fully associative cache is inefficient as the entire cache must be searched for a hit rather than one line (direct mapped) or one block (set associative).
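The trade-off described above is easy to demonstrate with a toy trace-driven simulation. This is a sketch of the general technique, not ATI's cache design; the cache sizes and the access trace are contrived to show the pathological conflict-miss case that hurts small direct mapped caches.

```python
from collections import OrderedDict


def direct_mapped_misses(trace, num_lines):
    """Direct mapped: each address maps to exactly one line (addr % num_lines)."""
    lines = [None] * num_lines
    misses = 0
    for addr in trace:
        idx = addr % num_lines
        if lines[idx] != addr:
            misses += 1
            lines[idx] = addr
    return misses


def fully_assoc_misses(trace, num_lines):
    """Fully associative with LRU replacement: any address may use any line."""
    cache = OrderedDict()
    misses = 0
    for addr in trace:
        if addr in cache:
            cache.move_to_end(addr)      # mark as most recently used
        else:
            misses += 1
            if len(cache) == num_lines:
                cache.popitem(last=False)  # evict the least recently used line
            cache[addr] = True
    return misses


# Two hot addresses that collide on the same line of a 4-line direct
# mapped cache evict each other on every single access.
trace = [0, 4, 0, 4, 0, 4, 0, 4]
print(direct_mapped_misses(trace, 4))  # 8 -- every access misses
print(fully_assoc_misses(trace, 4))    # 2 -- only the two compulsory misses
```

With only four lines, the direct mapped cache thrashes on this pattern while the fully associative cache holds both hot addresses comfortably; the cost, as the article notes, is that a real fully associative cache must compare the address against every line at once.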



It makes sense that ATI would move to a fully associative cache in this situation. If they had a large cache that serviced the entire range of clients and memory, a direct mapped (or more likely some n-way set associative) cache could make sense. With this new ring bus, if ATI split caches into multiple smaller blocks that service specific clients (as it appears they may have done), fully associative caches do make sense. Data from memory will be able to fill up the cache no matter where it's from, and searching smaller caches for hits shouldn't cut into latency too much. In fact, with a couple of fully associative caches heavily populated with relevant data, overall latency should be improved. ATI showed us some Z and texture cache miss rates relative to X850. This data indicates anywhere from 5% to 30% improvement in cache miss rates among a few popular games from their new system.

The following cache miss scaling graphs are not data collected by us, but reported by ATI. We do not currently have a way to reproduce data like this. While we cannot verify that the tests are impartial and accurate (so take the numbers with a grain of salt), the results are interesting enough for us to share them.





In the end, if common data patterns are known, cache design is fairly simple. It is easy to simulate cache hit/miss data based on application traces. A fully associative cache has its downsides (latency and complexity), so simply implementing them everywhere is not an option. Rather than accepting that fully associative caches are simply "better", it is much safer to say that a fully associative cache fits the design and makes better use of available resources on X1000 series hardware when managing data access patterns common in 3D applications.

Generally, bandwidth is more important than latency with graphics hardware as parallelism lends itself to effective bandwidth utilization and latency hiding. At the same time, as the use of flow control and branching increase, latency could potentially become more important than it is now.

The final new aspect of ATI's memory architecture is programmable bus arbitration. ATI is able to update and adapt the way the driver/hardware prioritizes memory access. The scheme is designed to weight memory requests based on a combination of latency and priority. The priority based scheme allows the system to determine and execute the most critical and important memory requests first while allowing data less sensitive to latency to wait its turn. The impression we have is that requests are required to complete within a certain number of cycles in order to prevent the starvation of any given thread, so the longer a request waits the higher its priority becomes.
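ATI did not disclose the arbitration algorithm itself, so the following sketch only illustrates the general idea as described: requests are weighted by priority, but a request's effective priority rises as it waits, and a request that has waited past a deadline is serviced regardless, which prevents starvation. The `max_wait` threshold and the tuple encoding are our own assumptions for the example.

```python
def pick_next(requests, max_wait=16):
    """Choose the next memory request to service.

    Each request is a (priority, age_in_cycles) pair, where a lower
    priority number means more urgent. Any request that has waited
    max_wait cycles or longer is promoted ahead of everything else,
    so no client can be starved indefinitely.
    """
    # Requests past the deadline win outright.
    starved = [r for r in requests if r[1] >= max_wait]
    pool = starved if starved else requests
    # Otherwise pick by priority, breaking ties in favor of the oldest.
    return min(pool, key=lambda r: (r[0], -r[1]))


# A low-priority request (priority 3) that has aged past the deadline
# is serviced before newer, nominally more urgent requests.
reqs = [(0, 2), (3, 20), (1, 5)]
print(pick_next(reqs))          # (3, 20)
print(pick_next([(0, 2), (1, 5)]))  # (0, 2) -- nothing starved, urgency wins
```

Because the arbitration is programmable, a scheme like this could in principle be re-tuned in the driver as access patterns in new games are profiled, which matches the flexibility ATI is claiming.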

ATI's ring bus architecture is quite interesting in and of itself, but there are some added benefits that go along with such a design. Altering the memory interface to connect with each memory device independently (rather than in four 64-bit wide buses) gives ATI some flexibility. Individually routing lines in 32-bit groups helps to make routing connections more manageable. It's possible to increase stability (or potential clock speed) with simpler connections. We've already mentioned that ATI is ready to support GDDR4 out of the box, but there is also quite a bit of potential for hosting very high clock speed memory with this architecture. This is of limited use to customers who buy the product now, but it does give ATI the potential to come out with new parts as better and faster memory becomes available. The possibility of upgrading the two 32-bit connections to something else is certainly there, and we hope to see something much faster in the future.

Unfortunately, we really don't have any reference point or testable data to directly determine the quality of this new design. Benchmarks will show how the platform as a whole performs, but whether the improvements come from the pixel pipelines, vertex pipelines, the memory controller, ring architecture, etc. is difficult to say.


  • mlittl3 - Wednesday, October 5, 2005 - link

    I'll tell you how it is a win. Take an architecture with 8 fewer pipelines, put it onto a brand new 90nm die shrink, clock the hell out of the thing, consume just a little more power and add all the new features like SM3.0, and you equal the competition's fastest card. This is a win. So when ATI releases 1, 2, 3 etc. more quad pipes, they will be even faster.

    I don't see anything bob. Anandtech's review was a very bad one. ALL the other sites said this is a good architecture and is on par with and a little faster than nvidia. None of those conclusions can be drawn from the confusing graphs here.

    Read the comments here and you will see others agree. Good job, ATI and Nvidia for bringing us competition and equal performing cards. Now bob, go to some other sites, get a good feel for which card suits your needs, and then go buy one. :)
  • bob661 - Wednesday, October 5, 2005 - link

    I read the other sites as well as AT. Quite frankly, I trust AT before any of the other sites because their methodology and consistency is top notch. HardOCP didn't even test a X1800XT and if I was an avid reader of their site I'd be wondering where that review was. I guess I don't see it your way because I only look for bang for the buck, not which could be better if it had this or had that. BTW, I just got some free money (no, I didn't steal it!) today so I'm going to pick up a 7800GT. :)
  • Houdani - Wednesday, October 5, 2005 - link

    One of the reasons for the card selections is due to the price of the cards -- and was stated as such. Just because ATI is calling the card "low-end" doesn't mean it should be compared with other low-end cards. If ATI prices their "low-end" card in the same range as a mid-range card, then it should rightfully be compared to those other cards which are at/near the price.

    But your point is well taken. I'd like to see a few more cards tossed in there.
  • Madellga - Wednesday, October 5, 2005 - link

    Derek, I don't know if you have the time for this, but a review at another website showed a huge difference in performance in the FEAR demo. ATI was in the lead with a substantial advantage in maximum framerates, but close at the minimums.

    http://techreport.com/reviews/2005q4/radeon-x1000/...

    As Fear points towards the new generation of engines, it might be worth running some numbers on it.

    Also useful would be to report minimum framerates at the higher resolutions, as this relates to good gameplay experience if all goodies are cranked up.
  • Houdani - Wednesday, October 5, 2005 - link

    Well, the review does state that the FEAR Demo greatly favors ATI, but that the actual shipping game is expected to not show such bias. Derek purposefully omitted the FEAR Demo in order to use the shipping game instead.
  • allnighter - Wednesday, October 5, 2005 - link

    Is it safe to assume that you guys might not have had enough time with these cards to do your usual in-depth review? I'm sure you'll update for us to be able to get the full picture. I also must say that I'm missing the OC part of the review. I wanted to see how true it is that these chips can go sky high. Given the fact that they had 3 re-spins it may as well be true.
  • TinyTeeth - Wednesday, October 5, 2005 - link

    ...an Anandtech review.

    But it's a bit thin, I must say. I'm still missing overclocking results and Half-Life 2 and Battlefield 2 results. How come no hardware site has tested the cards in Battlefield 2 yet?

    From my point of view, Doom III, Splinter Cell, Everquest II and Far Cry are the least interesting games out there.

    Overall it's a good review as you can expect from the absolutely best hardware site there is, but I hope and expect there will be another, much larger review.
  • Houdani - Wednesday, October 5, 2005 - link

    The best reason to continue benchmarking games which have been out for a while is because those are the games which the older GPUs were previously benched. When review sites stop using the old benchmarks, they effectively lose the history for all of the older GPU's, and therefore we lose those GPUs in the comparison.

    Granted, the review is welcome to re-benchmark the old GPUs using the new games ... but that would be a significant undertaking and frankly I don't see many (if any) review sites doing that.

    But I will throw you this bone: While I think it's quite appropriate to use benchmarks for two years (maybe even three years), it would also be a good thing to very slowly introduce new games at a pace of one per year, and likewise drop one game per year.
  • mongoosesRawesome - Wednesday, October 5, 2005 - link

    they have to retest whenever they use a different driver/CPU/motherboard, which is quite often. I bet they have to retest every other article or so. It's a pain in the butt, but that's why we visit and don't do the tests ourselves.
  • Madellga - Wednesday, October 5, 2005 - link

    Techreport has Battlefield 2 benchmarks, as well as FEAR, Guild Wars and others. I liked the article and recommend that you read it too.
