What is GDDR4?

The major advancement that makes the new X1950 series possible is the availability of GDDR4. GDDR4 itself is an incremental step forward from GDDR3 with an eye towards power saving. Of course, in the high-end graphics world "power saving" is a euphemism for overclockability. Thus we have a technology designed to run efficiently and to be pushed beyond the limit of reason. Sometimes we can have our cake and eat it too. While the majority of the power savings come in at lower clock speeds, we will see in our tests that there are some power benefits at the high end as well.

We have gotten our hands on some information about GDDR4, and will do our best to extract the most useful data. The first major advance of GDDR4 is a lower voltage requirement of 1.5V (or up to 1.9V when overclocking). At the low end, this offers a 30% power savings over GDDR3 clock for clock. We also see a fixed burst length of 8 with GDDR4 as opposed to 4 with GDDR3. This allows the DRAM core to run at half the frequency while offering the same memory bandwidth as GDDR3, which results in significant power savings (a 2GHz data rate GDDR3 chip would run with a core clock of 500MHz, while GDDR4 can run at 250MHz). Alternatively, this can be used to provide higher memory speeds in high-end systems.
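
To put some numbers on that relationship, here is a quick back-of-the-envelope sketch (our own illustration, not vendor code) of how the prefetch/burst length sets the core clock needed for a given data rate:

    # Core clock needed for a given per-pin data rate, assuming the DRAM core
    # fetches 'prefetch' bits per data pin every core cycle.
    def core_clock_mhz(data_rate_mhz, prefetch):
        return data_rate_mhz / prefetch

    print(core_clock_mhz(2000, 4))  # GDDR3-style 4n prefetch: 500.0 MHz core
    print(core_clock_mhz(2000, 8))  # GDDR4-style 8n prefetch: 250.0 MHz core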

Data bus inversion (DBI) also makes its way into memory with GDDR4. This technique lowers the average power used by the bus by minimizing the number of zeros transmitted. At first glance this might not make much sense, but it all has to do with how zeros are sent. These days it's most common to see digital logic use active low signaling, meaning that a digital 1 is actually represented by a low voltage state. This is ostensibly because it is easier to create a sink than a source (it's easier to pull voltage down from a high state than to raise it up from a ground state). As a result, we actually use more power when sending a zero, because the signal for a zero is a high voltage state.

The way DBI works is that all the data is inverted if the current byte to be transmitted contains more than 4 zeros. A separate control bit (aptly named the DBI flag) indicates whether the data on the bus is inverted or not. Here are a couple of examples of what happens when transmitting data over a bus using DBI.

data to send: 11100000
data on bus: 00011111, DBI Flag = 1

data to send: 11111000
data on bus: 11111000, DBI Flag = 0
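
To make the rule concrete, here is a minimal sketch of the per-byte encoding decision (our own illustration; the actual circuit does this in hardware):

    # Minimal DBI sketch: invert the byte when it contains more than 4 zeros.
    def dbi_encode(byte):
        zeros = 8 - bin(byte).count("1")
        if zeros > 4:
            return (~byte) & 0xFF, 1   # send inverted data, DBI flag = 1
        return byte, 0                 # send data as-is, DBI flag = 0

    print(dbi_encode(0b11100000))  # five zeros -> inverted to 0b00011111, flag = 1
    print(dbi_encode(0b11111000))  # three zeros -> sent as 0b11111000, flag = 0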

Addressing is also done differently with GDDR4. If we consider the 16Mx32 (meaning 16 million addresses that each hold 32 bits of data) 512Mbit GDDR4 modules currently available from Samsung, we have only 12 address pins. A full address is sent over two consecutive clock cycles (as 24 bits are needed to select between 16 million addresses). This frees up pins for other uses, like power and ground, which can improve the DRAM's ability to run at high speeds. Among the other optimizations, a multi-cycle preamble is used to make sure that timing is accurate when sending and receiving data (allowing for faster speeds), GDDR4 has a lower input capacitance than GDDR3, and memory manufacturers have more control over the properties of the transistors and resistors used in the driver and receiver in order to better tune products to specific needs.
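
As a rough illustration of the two-cycle transfer, a 24-bit address simply gets sent as two 12-bit halves (a hypothetical split of our own; the real pin mapping is defined by the GDDR4 spec):

    # Illustrative only: a 24-bit address split into two 12-bit chunks
    # so it fits on a 12-pin address bus over two clock cycles.
    def split_address(addr):
        assert 0 <= addr < (1 << 24)
        first  = (addr >> 12) & 0xFFF   # sent on the first cycle
        second = addr & 0xFFF           # sent on the second cycle
        return first, second

    print([hex(x) for x in split_address(0xABCDE)])  # ['0xab', '0xcde']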

Right now, ATI is using Samsung's 80nm 0.91ns K4U52324QE GDDR4 modules on its X1950 products. This is actually the slowest GDDR4 memory that Samsung sells, clocking in at a max of 1100MHz. Their 0.714ns RAM is capable of hitting 1400MHz, which will be able to put future graphics cards beyond the 2.5GHz data rate and up near the 80GB/s range in memory bandwidth. Of course, the X1950 XTX memory bandwidth of 59.6GB/s is pretty impressive in itself. From a clock-for-clock perspective, GDDR4 can offer advantages, but we shouldn't expect anything revolutionary at this point. We ran a couple of tests underclocking the X1950 XTX and saw performance on par with or slightly faster than the X1900 XTX.
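
For reference, those bandwidth figures fall straight out of the data rate and the 256-bit memory bus; a quick sanity check (using binary gigabytes, as the figures above do):

    # Peak bandwidth = memory clock * 2 (DDR) * bus width; divide by 2**30
    # to get the binary-GB/s figures quoted above.
    def bandwidth_gbs(memory_clock_mhz, bus_width_bits=256):
        bytes_per_sec = memory_clock_mhz * 1e6 * 2 * bus_width_bits / 8
        return bytes_per_sec / 2**30

    print(round(bandwidth_gbs(1000), 1))  # X1950 XTX at 1000MHz: 59.6 GB/s
    print(round(bandwidth_gbs(1400), 1))  # 1400MHz GDDR4: ~83.4 GB/s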

74 Comments

  • SixtyFo - Friday, September 15, 2006 - link

    So do they still use a dongle between the cards? If you had 2 xfire cards then it won't be connecting to a dvi port. Is there an adaptor? I guess what I'm asking is are you REALLY sure I can run 2 crossfire ed. x1950s together? I'm about to drop a grand on video cards so that piece of info may come in handy.
  • unclebud - Friday, September 01, 2006 - link

    "And 10Mhz beyond the X1600 XT is barely enough to warrant a different pair of letters following the model number, let alone a whole new series starting with the X1650 Pro."

    nvidia has been doing it for years with the 4mx/5200/6200/7300/whatever and nobody here said boo!
    hm.
  • SonicIce - Thursday, August 24, 2006 - link

    How can a whole X1900XTX system use only 267 watts? So a 300w power supply could handle the system?
  • DerekWilson - Saturday, August 26, 2006 - link

    Generally you need something bigger than a 300W PSU, because the main problem is that the current supply on both 12V rails must be fairly high.
  • Trisped - Thursday, August 24, 2006 - link

    The crossfire card is not the same as the normal one. The normal card also has the extra video out options. So there is a reason to buy the one to team up with the other, but only if you need to output to composite, S-Video, or component.
  • JarredWalton - Thursday, August 24, 2006 - link

    See discussion above under the topic "well..."
  • bob4432 - Thursday, August 24, 2006 - link

    why is the x1800xt left out of just about every comparison i have read? for the price you really can't beat it....
  • araczynski - Thursday, August 24, 2006 - link

    ...I haven't read the article, but i did want to just make a comment...

    having just scored a brand new 7900gtx for $330 shipped, it feels good to be able to see the headlines for articles like this, ignore them, and think "...whew, i won't have to read anymore of these until the second generation of DX10's comes out..."

    I'm guessing nvidia will be skipping the 8000's, and 9000's, and go straight for the 10,000's, to signal the DX10 and 'uber' (in hype) improvements.

    either way, its nice to get out of the rat race for a few years.
  • MrJim - Thursday, August 24, 2006 - link

    Why no Anisotropic filtering tests? Or am i blind?
  • DerekWilson - Saturday, August 26, 2006 - link

    Yes, all tests are performed with at least 8xAF. Under games that don't allow selection of a specific degree of AF, we choose the highest quality texture filtering option (as in BF2, for instance).

    AF comes at fairly little cost these days, and it just doesn't make sense not to turn on at least 8x. I wouldn't personally want to go any higher without angle independent AF (like the high quality AF offered on ATI X1K cards).
