Memory Architecture

One of the newest features of the X1000 series is something ATI calls a "ring bus" memory architecture. The general idea behind the design is to improve memory bandwidth effectiveness while reducing cache misses, resulting in overall better memory performance. The architecture already supports GDDR4, but current boards have to settle for the fastest GDDR3 available until memory makers ship GDDR4 parts.

For quite some time, the high end in graphics memory architecture has been a straightforward 256-bit bus divided into four 64-bit channels on the GPU. The biggest issues with scaling up this type of architecture are routing, packaging, and clock speed. Routing 256 wires from the GPU to RAM is quite complex, and cards with wide buses require printed circuit boards (PCBs) with more layers than cards with narrower buses in order to handle that complexity.

In order to support such a bus, the GPU has to have 256 physical external connections. Adding more and more external connections to a single piece of silicon also complicates raising clock speeds and managing clocking between the memory devices and the GPU. In the push for ever-improving performance, increases in clock speed and memory bandwidth are constantly weighed for cost and benefit.

Rather than pushing up the bit width of the bus to improve performance, ATI has taken another approach: improving the management and internal routing of data. Instead of four 64-bit memory interfaces hooked into a large on-die cache, the GPU has four "ring stops" that connect to each other, to graphics memory, and to multiple caches and clients within the GPU. Each ring stop has two 32-bit connections to two memory devices and two outgoing 256-bit connections to other ring stops. ATI calls this a 512-bit Ring Bus, because two 256-bit rings run around the ring stops.
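
To make the topology concrete, here is a minimal sketch (in Python) of how the four ring stops and their memory channels could be modeled. The counts come from the description above; the way addresses interleave across the eight 32-bit channels is purely an assumption for illustration, not ATI's documented mapping.

```python
# Hypothetical model of the ring bus topology described above.
# The counts (4 ring stops, 2 x 32-bit channels each, 2 x 256-bit rings)
# come from the article; the address interleaving policy is assumed.

NUM_RING_STOPS = 4              # ring stops on the die
CHANNELS_PER_STOP = 2           # 32-bit memory channels per ring stop
RING_WIDTH_BITS = 256           # width of each of the two rings
CHANNEL_INTERLEAVE_BYTES = 256  # assumed interleave granularity (illustrative)

def ring_stop_for_address(addr: int) -> int:
    """Map a physical address to the ring stop that owns its memory channel.

    Assumes addresses interleave across the 8 channels in fixed-size chunks,
    which is an illustrative policy only, not ATI's documented mapping.
    """
    total_channels = NUM_RING_STOPS * CHANNELS_PER_STOP
    channel = (addr // CHANNEL_INTERLEAVE_BYTES) % total_channels
    return channel // CHANNELS_PER_STOP

if __name__ == "__main__":
    for addr in (0x0000, 0x0200, 0x0400, 0x0600):
        print(hex(addr), "-> ring stop", ring_stop_for_address(addr))
```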



Routing incoming data through a 512-bit internal bus helps ATI get data where it needs to go quickly. Each of the ring stops connects to a different set of caches. There are more than 30 independent clients that require memory access within an X1000 series GPU. When one of these clients needs data that is not in a cache, the memory controller forwards the request to the ring stop attached to the physical memory holding the required data. That ring stop then forwards the data around the ring to the ring stop (and cache) nearest the requesting client.
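
A rough sketch of that forwarding step follows. It assumes data can travel around the ring in either direction and take the shorter path; that hop model is our simplification, not a detail ATI has disclosed.

```python
NUM_RING_STOPS = 4

def ring_hops(owning_stop: int, requesting_stop: int) -> int:
    """Hops needed to forward data from the ring stop attached to the
    memory holding it to the ring stop nearest the requesting client.

    Assumes data may travel around the ring in either direction and
    takes the shorter path (an illustrative simplification).
    """
    forward = (requesting_stop - owning_stop) % NUM_RING_STOPS
    backward = (owning_stop - requesting_stop) % NUM_RING_STOPS
    return min(forward, backward)

if __name__ == "__main__":
    # Data behind ring stop 0, requested by a client whose cache sits at stop 3:
    print(ring_hops(0, 3))  # 1 hop, taking the shorter direction
```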



The primary function of memory management shifts to keeping the caches full of relevant data. Rather than having a single memory controller on the GPU aggregate requests and control bandwidth, the memory controllers and ring bus work to keep data closer to the hardware that needs it most and can deal with each 32-bit channel independently. This essentially trades bandwidth efficiency for improved latency between memory and internal clients that need data quickly. With writes cached and going through the crossbar switch, and with the ring bus keeping data moving to the cache nearest the clients that need it, ATI is able to tweak its caches to fit the new design as well.

On previous hardware, caches were direct mapped or set associative, meaning that every address in memory maps to a specific cache line (or, in the set-associative case, a specific set). With larger caches, direct-mapped and set-associative designs work well (as with the L2 and L3 caches on a CPU). If a smaller cache is direct mapped, it is very easy for useful data to get kicked out too early by other data. Conversely, a large fully associative cache is inefficient, as the entire cache must be searched for a hit rather than one line (direct mapped) or one set (set associative).
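
As a quick illustration of the conflict problem, here is a toy direct-mapped cache in which two addresses that share an index repeatedly evict each other. The line count, line size, and addresses are arbitrary choices for the example, not anything derived from ATI's hardware.

```python
# Toy direct-mapped cache: every block maps to exactly one line, so two
# addresses that share an index keep evicting each other. Line count,
# line size, and addresses are arbitrary illustrative choices.

NUM_LINES = 16
LINE_SIZE = 64  # bytes

lines = [None] * NUM_LINES  # one tag per line

def access(addr: int) -> bool:
    """Return True on a hit, False on a miss (which fills the line)."""
    block = addr // LINE_SIZE
    index = block % NUM_LINES
    tag = block // NUM_LINES
    if lines[index] == tag:
        return True
    lines[index] = tag  # evict whatever was resident
    return False

if __name__ == "__main__":
    a, b = 0x0000, 0x0400  # both map to index 0 with these parameters
    for addr in (a, b, a, b):
        print(hex(addr), "hit" if access(addr) else "miss")
    # All four accesses miss: the two blocks keep kicking each other out.
```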



It makes sense that ATI would move to fully associative caches in this situation. If they had one large cache that serviced the entire range of clients and memory, a direct-mapped (or, more likely, an n-way set-associative) cache could make sense. With this new ring bus, if ATI split the caches into multiple smaller blocks that service specific clients (as it appears they may have done), fully associative caches do make sense. Data from memory will be able to fill up the cache no matter where it's from, and searching smaller caches for hits shouldn't cut into latency too much. In fact, with a couple of fully associative caches heavily populated with relevant data, overall latency should be improved. ATI showed us some Z and texture cache miss rates relative to the X850. The data indicate anywhere from a 5% to 30% improvement in cache miss rates in a few popular games with the new design.
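
For contrast with the direct-mapped example above, here is a minimal fully associative cache with LRU replacement: any block can occupy any entry, at the cost of checking every entry on a lookup. The 8-entry size is an assumption for illustration; ATI has not published the actual dimensions of these caches.

```python
from collections import OrderedDict

# Toy fully associative cache with LRU replacement: any block can live in
# any entry, so a lookup has to check all of them. The 8-entry size is an
# assumption for illustration only.

class FullyAssociativeCache:
    def __init__(self, entries: int = 8, line_size: int = 64):
        self.entries = entries
        self.line_size = line_size
        self.blocks = OrderedDict()  # block number -> None, ordered by recency

    def access(self, addr: int) -> bool:
        """Return True on a hit, False on a miss (filled via LRU eviction)."""
        block = addr // self.line_size
        if block in self.blocks:              # conceptually, search every entry
            self.blocks.move_to_end(block)    # mark most recently used
            return True
        if len(self.blocks) >= self.entries:
            self.blocks.popitem(last=False)   # evict the least recently used
        self.blocks[block] = None
        return False

if __name__ == "__main__":
    cache = FullyAssociativeCache()
    for addr in (0x0000, 0x0400, 0x0000, 0x0400):
        print(hex(addr), "hit" if cache.access(addr) else "miss")
    # The pair that thrashed the direct-mapped cache now hits after warm-up.
```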

The following cache miss scaling graphs are not data collected by us, but reported by ATI. We do not currently have a way to reproduce results like these, and we cannot verify that the tests are impartial and accurate (so take the numbers with a grain of salt), but the results are interesting enough for us to share.





In the end, if common data access patterns are known, cache design is fairly simple: it is easy to simulate cache hit/miss behavior based on application traces. A fully associative cache has its downsides (latency and complexity), so simply implementing one everywhere is not an option. Rather than accepting that fully associative caches are simply "better", it is much safer to say that a fully associative cache fits the design and makes better use of available resources on X1000 series hardware when managing data access patterns common in 3D applications.
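
To show what that kind of trace-driven simulation looks like, here is a small sketch that replays a synthetic access trace against the two toy cache models from above and compares miss rates. The trace shape (two interleaved streams with small working sets), cache sizes, and addresses are all assumptions made purely for the example; real application traces and ATI's actual cache parameters would obviously differ.

```python
import random
from collections import OrderedDict

# Sketch of trace-driven cache simulation: replay a synthetic access trace
# against two toy cache models and compare miss rates. Trace shape, cache
# sizes, and addresses are assumptions made purely for illustration.

LINE = 64      # bytes per cache line
ENTRIES = 16   # entries in each toy cache

def direct_mapped():
    lines = [None] * ENTRIES
    def lookup(addr):
        block = addr // LINE
        index, tag = block % ENTRIES, block // ENTRIES
        hit = lines[index] == tag
        lines[index] = tag
        return hit
    return lookup

def fully_associative():
    blocks = OrderedDict()
    def lookup(addr):
        block = addr // LINE
        if block in blocks:
            blocks.move_to_end(block)
            return True
        if len(blocks) >= ENTRIES:
            blocks.popitem(last=False)
        blocks[block] = None
        return False
    return lookup

def miss_rate(trace, lookup):
    return sum(not lookup(addr) for addr in trace) / len(trace)

if __name__ == "__main__":
    random.seed(0)
    # Crude stand-in for a real trace: two interleaved streams (think texture
    # and Z), each reusing a small working set, placed so that their blocks
    # alias in the direct-mapped cache.
    stream_a = [i * LINE for i in range(8)]
    stream_b = [(1 << 20) + i * LINE for i in range(8)]
    trace = []
    for _ in range(5000):
        trace.append(random.choice(stream_a))
        trace.append(random.choice(stream_b))
    print("direct mapped miss rate:     %4.1f%%" % (100 * miss_rate(trace, direct_mapped())))
    print("fully associative miss rate: %4.1f%%" % (100 * miss_rate(trace, fully_associative())))
```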

Generally, bandwidth is more important than latency with graphics hardware, as parallelism lends itself to effective bandwidth utilization and latency hiding. At the same time, as the use of flow control and branching increases, latency could become more important than it is now.

The final new aspect of ATI's memory architecture is programmable bus arbitration. ATI is able to update and adapt the way the driver/hardware prioritizes memory access. The scheme is designed to weight memory requests based on a combination of latency and priority. The priority-based scheme allows the system to determine and execute the most critical memory requests first while allowing data less sensitive to latency to wait its turn. The impression we have is that requests are required to complete within a certain number of cycles in order to prevent the starvation of any given thread, so the longer a request waits, the higher its priority becomes.
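
Here is a sketch of how a latency- and priority-weighted arbiter with aging might behave: waiting requests gain effective priority over time, so even low-priority clients eventually get serviced. The weighting formula, the client names, and the numbers are all assumptions for illustration; ATI has not disclosed its actual arbitration algorithm.

```python
from dataclasses import dataclass

# Sketch of a priority-plus-aging memory arbiter: each cycle the pending
# request with the highest effective priority is serviced, and effective
# priority grows the longer a request waits, so no client starves.
# AGE_WEIGHT, the client names, and the priorities are illustrative only.

AGE_WEIGHT = 0.5  # how quickly waiting raises a request's priority

@dataclass
class Request:
    client: str
    base_priority: int   # e.g. display scan-out high, prefetch low (assumed)
    issue_cycle: int

    def effective_priority(self, now: int) -> float:
        return self.base_priority + AGE_WEIGHT * (now - self.issue_cycle)

def arbitrate(pending: list, now: int) -> Request:
    """Pick the pending request with the highest effective priority."""
    return max(pending, key=lambda r: r.effective_priority(now))

if __name__ == "__main__":
    pending = [
        Request("display", base_priority=10, issue_cycle=100),
        Request("texture", base_priority=2, issue_cycle=80),  # waited 20 cycles
        Request("vertex", base_priority=4, issue_cycle=98),
    ]
    winner = arbitrate(pending, now=100)
    print("service:", winner.client)  # texture: 2 + 0.5 * 20 = 12 beats display's 10
```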

ATI's ring bus architecture is quite interesting in and of itself, but there are some added benefits that go along with such a design. Altering the memory interface to connect with each memory device independently (rather than in four 64-bit wide buses) gives ATI some flexibility. Routing lines in 32-bit groups helps to make the connections more manageable, and it's possible to increase stability (or potential clock speed) with simpler connections. We've already mentioned that ATI is ready to support GDDR4 out of the box, but there is also quite a bit of potential for hosting very high clock speed memory with this architecture. This is of limited use to customers who buy the product now, but it does give ATI the potential to come out with new parts as better and faster memory becomes available. The possibility of upgrading the two 32-bit connections to something else is certainly there, and we hope to see something much faster in the future.

Unfortunately, we really don't have any reference point or testable data with which to directly determine the quality of this new design. Benchmarks will show how the platform as a whole performs, but whether the improvements come from the pixel pipelines, vertex pipelines, memory controller, ring architecture, or elsewhere is difficult to say.

Comments

  • Wellsoul2 - Wednesday, October 5, 2005 - link

    I really prefer ATI so this is a disappointment.

    The 1300 and 1600 are pretty weak.

    Might as well keep my 9600XT versus the 1300 - Can still play HL2 with noAA/AF.

    The only good thing is maybe the price will drop on the x800/850 line.

    The X1800 seems like a good card but why pay that money.

    Why bother with the shared memory cards? It's dumb.
  • Cookie Crusher - Wednesday, October 5, 2005 - link

    grammar is actually spelled with an "a" ;)
  • OvErHeAtInG - Wednesday, October 5, 2005 - link

    Yes, I have a feeling it'll be one of those cases where they make some edits and fixes to the article. Not that horrible, come on - I do agree the graphs are confusing. More important than graphs of benches, though, for me is the examination of the new AA, the architecture, features etc. Which they did a fair job of.

    One remark: the bulleted lists are missing the bullets ... e.g. on page 2 the list of new features.
  • bldckstark - Wednesday, October 5, 2005 - link

    Yes, this is the worst article I have ever seen posted on Anandtech. Will Anandtech continue to be my first stop on my daily hardware fix? Yes. Will I ever make Toms Hardware my first stop again? No. JEEEEZ toms sucks now. If you want to complain about a site as a whole take a look at them. They actually posted articles about how to pick up chicks while gaming! Multiple articles! Good Lord.
  • Houdani - Wednesday, October 5, 2005 - link

    Agreed! They did do a nice analysis of the new architecture.
    Agreed! Where are the bullets? (page 2 feature list, page 7 games list).
  • tfranzese - Wednesday, October 5, 2005 - link

    Everyone's always surprised by this. Why? They've done this countless times now as if it's acceptable. Seriously, don't post an article until it's done and have it proofread carefully before posting it. I honestly doubt your (Anandtech) editors are doing more than just skimming articles sometimes with the number of typos and grammatical errors I come across.

    I hope the quality goes back up, because it will eventually hurt your reputation.
  • tfranzese - Wednesday, October 5, 2005 - link

    I'll add, Anandtech is almost always my first stop to read a breaking review. Unfortunately, truths such as that below could someday change that. Today, Tech Report had the better article.

    quote:

    We will have tables of all the data with all the numbers we ran across all the resolutions with 4xAA and 8xAF up shortly.

    Quite a bit of data was collected and it has taken some time to organize. You are absolutely right to want more, and we are working on getting it out the door as soon as possible.

    Thanks,
    Derek Wilson


    Not their worst article, but things should be improving - not getting worse.
  • AnandThenMan - Wednesday, October 5, 2005 - link

    I agree. VERY WEAK REVIEW! Terrible. Honestly, what happened? Anandtech is usually much, much more with it. Disappointed.

    As for the R520, I think I'm like most people and just feel, meh.
  • misterspoot - Wednesday, October 5, 2005 - link

    Since the X1800 SKUs will not have the AGP bridge available (PCI-E) only, that leaves the X1600XT to attempt to give us AGP users a performance boost.

    Sadly, the X1600XT performs barely on par with a GeForce 6600GT -- which can be had for $150. Then, looking at the performance of the X1600XT, and comparing it to the X850 XT-PE -- surprise surprise, the year-plus old X850 XT is considerably superior.

    So if you're like me and built your box nearly 2 years ago, and have no choice but to buy an AGP part, it looks like the X850 XT-PE is going to be the highest performance part you can buy. Looks like I'll be grabbing one this weekend, so my performance in raids on Molten Core is drastically improved (runs a 6600GT at 1600x900 with minimum detail settings -- suffers from mid 20fps all the time while trying to tank).
  • DRavisher - Wednesday, October 5, 2005 - link

    The review states: "With its 512MB of onboard RAM, the X1800 XT scales especially well at high resolutions,". From what I see it scales very poorly at high resolutions compared to the 7800GTX 256MB card. Just look at what happens in SC:CT and FarCry. The XT goes from having a substantial lead in 1600x1200 to being about equal with the 7800GTX at 2048x1536.
