What is GDDR4?

The major advancement that makes the new X1950 series possible is the availability of GDDR4. This really is an incremental step forward from GDDR3 with an eye towards power saving. Of course, in the high end graphics world "power saving" is a euphemism for overclockability. Thus we have a technology designed to run efficiently and to be pushed beyond the limit of reason. Sometimes we can have our cake and eat it too. While the majority of the power savings come in at lower clock speeds, we will see in our tests that there are some power benefits at the high end as well.

We have gotten our hands on some information about GDDR4 and will do our best to extract the most useful data. The major advances of GDDR4 include a lower voltage requirement of 1.5V (or up to 1.9V when overclocking). At the low end, this offers a 30% power savings over GDDR3 clock for clock. We also see a fixed burst length of 8 with GDDR4, as opposed to 4 with GDDR3. This allows the DRAM core to run at half the frequency of an equivalent GDDR3 core while offering the same memory bandwidth, which results in significant power savings (a 2GHz data rate GDDR3 chip would run with a core clock of 500MHz, while GDDR4 can run at 250MHz). Alternatively, this can be used to provide higher memory speeds in high-end systems.
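
To illustrate that relationship, here is a minimal sketch (our own Python illustration, not anything from ATI or Samsung) of how the core clock falls out of the data rate and burst length, assuming double data rate I/O and a core that delivers one burst of prefetched data per core cycle:

    def core_clock_mhz(data_rate_mhz: float, burst_length: int) -> float:
        """DRAM core clock implied by a given data rate and burst length.

        Assumes DDR I/O and a core that supplies one full burst of prefetched
        data per core cycle, i.e. core clock = data rate / burst length.
        """
        return data_rate_mhz / burst_length

    # The article's 2GHz data rate example:
    print(core_clock_mhz(2000, 4))  # GDDR3, burst length 4 -> 500.0 MHz core clock
    print(core_clock_mhz(2000, 8))  # GDDR4, burst length 8 -> 250.0 MHz core clock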

Data bus inversion (DBI) also makes its way into memory with GDDR4. This technique helps lower the average power used by the bus by minimizing the number of zeros transmitted. At first glance this might not make much sense, but it all has to do with how zeros are sent. These days it is most common to see digital logic use active low signaling, meaning a digital 1 is actually represented by a low voltage state. This is ostensibly because it is easier to create a sink than a source (it's easier to pull voltage down from a high state than to raise it up from a ground state). As a result, more power is used when sending a zero, because a zero is signaled with the high voltage state.

The way DBI works is that the data is inverted if the current byte to be transmitted contains more than 4 zeros. A separate control bit (aptly named the DBI flag) indicates whether the data on the bus is inverted or not. Here are a couple of examples of what happens when transmitting data over a bus using DBI.

data to send: 11100000
data on bus: 00011111, DBI Flag = 1

data to send: 11111000
data on bus: 11111000, DBI Flag = 0
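
As a rough sketch of that rule (invert the byte when it contains more than four zeros, and raise the DBI flag so the receiver knows to invert it back), the following Python is our own illustration rather than ATI's or Samsung's implementation:

    def dbi_encode(byte: int) -> tuple[int, int]:
        """Return (byte_on_bus, dbi_flag) for one byte under DBI.

        If the byte has more than four 0 bits, send the inverted byte with
        the DBI flag set; otherwise send it unchanged.
        """
        zeros = 8 - bin(byte & 0xFF).count("1")
        if zeros > 4:
            return (~byte) & 0xFF, 1
        return byte & 0xFF, 0

    # The two examples above:
    print(dbi_encode(0b11100000))  # five zeros  -> (0b00011111, 1)
    print(dbi_encode(0b11111000))  # three zeros -> (0b11111000, 0)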

Addressing is also done differently with GDDR4. Consider the 16Mx32 (16 million addresses, each holding 32 bits of data) 512Mbit GDDR4 modules currently available from Samsung: they have only 12 address pins, so a full address is sent over two consecutive clock cycles (24 bits are needed to select between 16 million addresses). This frees up pins for other uses, such as additional power and ground connections, which can improve the DRAM's ability to run at high speeds. Among the other optimizations, a multi-cycle preamble is used to make sure that timing is accurate when sending and receiving data (allowing for faster speeds), GDDR4 has a lower input capacitance than GDDR3, and memory manufacturers have more control over the properties of the transistors and resistors used in the driver and receiver, letting them better tune products to specific needs.
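
As a simple illustration of the two-cycle addressing, the sketch below splits a 24-bit address into two 12-bit halves as it might be driven onto 12 address pins on consecutive clocks; the ordering and names here are our assumptions for illustration, not Samsung's actual pin protocol:

    def split_address(addr: int, pins: int = 12) -> tuple[int, int]:
        """Split a 24-bit address into two halves for a 12-pin address bus.

        How a real part maps row/column/bank bits onto the pins is defined by
        the DRAM protocol; this only shows the two-cycle idea.
        """
        mask = (1 << pins) - 1
        return (addr >> pins) & mask, addr & mask

    # 16M addresses need 24 bits; e.g. address 0xABCDEF:
    print([hex(half) for half in split_address(0xABCDEF)])  # ['0xabc', '0xdef']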

Right now, ATI is using Samsung's 80nm 0.91ns K4U52324QE GDDR4 modules on its X1950 products. This is actually the slowest GDDR4 memory that Samsung sells, clocking in at a maximum of 1100MHz. Their 0.714ns RAM is capable of hitting 1400MHz, which will be able to put future graphics cards beyond a 2.5GHz data rate and up near the 80GB/s range in memory bandwidth. Of course, the X1950 XTX's memory bandwidth of 59.6GB/s is pretty impressive in itself. From a clock-for-clock perspective, GDDR4 can offer advantages, but we shouldn't expect anything revolutionary at this point. We ran a couple of tests underclocking the X1950 XTX and saw performance on par with or slightly faster than the X1900 XTX.
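
For reference, those bandwidth figures follow from the 256-bit memory bus; the quick calculation below is our own, treating a GB as 2^30 bytes, which is how the 59.6GB/s figure works out:

    def bandwidth_gb_per_s(data_rate_mhz: float, bus_width_bits: int = 256) -> float:
        """Peak memory bandwidth in GB/s, with GB taken as 2^30 bytes."""
        bytes_per_second = data_rate_mhz * 1e6 * bus_width_bits / 8
        return bytes_per_second / 2**30

    print(round(bandwidth_gb_per_s(2000), 1))  # X1950 XTX, 2.0GHz data rate -> 59.6
    print(round(bandwidth_gb_per_s(2800), 1))  # 1400MHz GDDR4, 2.8GHz data rate -> 83.4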

74 Comments

  • Vigile - Wednesday, August 23, 2006 - link

    My thought exactly on this one Anand...
  • Anand Lal Shimpi - Wednesday, August 23, 2006 - link

    You can run dual monitors with a CrossFire card as well, the CrossFire dongle that comes with the card has your 2nd DVI output on it :)

    Take care,
    Anand
  • kneecap - Wednesday, August 23, 2006 - link

    What about VIVO? The Crossfire Edition does not support that.
  • JarredWalton - Wednesday, August 23, 2006 - link

    For high-end video out, the DVI port is generally more useful anyway. It's also required if you want to hook up to a display using HDCP - I think that will work with a DVI-to-HDMI adapter, but maybe not? S-VIDEO and Composite out are basically becoming seldom used items in my experience, though the loss of component out is a bit more of a concern.
  • JNo - Thursday, August 24, 2006 - link

    So if I use DVI out and attach a DVI to HDMI adaptor before attaching to a projector or HDTV, will I get a properly encrypted signal to fully display future blu-ray/hd-dvd encrypted content?

    The loss of component is a bit of a concern as many HDTVs and projectors still produce amazing images with component and, in fact, I gather that some very high resolutions+refresh rates are possible on component but not DVI due to certain bandwidth limitations with DVI. But please correct me if I am wrong. I take Anandtech's point on the crossfire card offering more but, with a couple of admittedly small question marks, I see no reason not to get the standard card and crossfire for the second later if you decide to go that route...
  • JarredWalton - Thursday, August 24, 2006 - link

    I suppose theoretically component could run higher resolutions than DVI, with dual-link being required for 2048x1536 and higher. Not sure what displays support such resolutions with component inputs, though. Even 1080p can run off of single-link DVI.

    I think the idea with CF cards over standard is that they will have a higher resale value if you want to get rid of them in the future, and they are also more versatile -- TV out capability being the one exception. There are going to be a lot of people that get systems with a standard X1950 card, so if they want to upgrade to CrossFire in the future they will need to buy the CrossFire edition. We all know that at some point ATI is no longer going to make any of the R5xx cards, so if people wait to upgrade to CrossFire they might be forced to look for used cards in a year or two.

    Obviously, this whole scenario falls apart if street prices on CrossFire edition cards end up being higher than the regular cards. Given the supply/demand economics involved, that wouldn't be too surprising, but of course we won't know for another three or four weeks.
  • UNESC0 - Wednesday, August 23, 2006 - link

    thanks for clearing that up Anand, news to me!
  • TigerFlash - Wednesday, August 23, 2006 - link

    I was wondering if anyone thinks it's wise to get an Intel Core 2 Duo motherboard with CrossFire support now that AMD is buying out ATI. Do you think ATI would stop supporting Intel motherboards?
  • johnsonx - Wednesday, August 23, 2006 - link

    quote: "Do you think ATI would stop supporting Intel motherboards?"

    Of course not. AMD/ATI isn't stupid. Even if their cross-licensing agreement with Intel didn't prevent them from blocking Crossfire on Intel boards (which it almost surely does), cutting out that part of the market would be foolish.
  • dderidex - Wednesday, August 23, 2006 - link

    What's with the $99 -> $249 gap?

    Weren't we supposed to see an X1650XT, too? Based on RV570? ...or RV560? Something?
