More GDDR5 Technologies: Memory Error Detection & Temperature Compensation

As we previously mentioned, for Cypress AMD has implemented a greater portion of the GDDR5 specification in its memory controllers. Beyond gaining the use of GDDR5’s power saving features, AMD has also been working on implementing features that allow its cards to reach higher memory clock speeds. Chief among these is support for GDDR5’s error detection capabilities.

One of the biggest problems in using a high-speed memory device like GDDR5 is that it requires a bus that’s both fast and fairly wide - properties that generally run counter to each other in device bus design. A single GDDR5 memory chip on the 5870 connects to a bus that’s 32 bits wide and runs at a base speed of 1.2GHz, which means the bus must meet exceedingly tight tolerances. Adding to the challenge, a card like the 5870 with a 256-bit total memory bus requires eight of these buses, which means more noise from adjoining buses and less room to work in.

Because of the difficulty in building such a bus, the memory bus has become the weak point for video cards using GDDR5. The GPU’s memory controller can do more and the memory chips themselves can do more, but the bus can’t keep up.

To combat this, GDDR5 memory controllers can perform basic error detection on both reads and writes by implementing a CRC-8 function. With this feature enabled, for each 64-bit data burst an 8-bit cyclic redundancy check (CRC-8) is transmitted via a set of four dedicated EDC pins. This CRC is then used to check the contents of the data burst, determining whether any errors were introduced during transmission.
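The per-burst check can be sketched in software. The GDDR5 EDC scheme is commonly documented as using the CRC-8-ATM polynomial (x^8 + x^2 + x + 1); the snippet below is an illustrative model of the computation, not the hardware logic:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 using polynomial x^8 + x^2 + x + 1 (0x07), the ATM HEC
    polynomial commonly cited for GDDR5's EDC. Illustrative model only."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

burst = bytes.fromhex("0123456789abcdef")  # one 64-bit (8-byte) data burst
edc = crc8(burst)                          # 8-bit value carried on the EDC pins
assert crc8(burst) == edc                  # receiver recomputes and compares
```

If the receiver’s recomputed CRC doesn’t match the value sent alongside the burst, the burst is known to be corrupt.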

The specific CRC function used in GDDR5 can detect 1-bit and 2-bit errors with 100% accuracy, with that accuracy falling as more bits are in error. This is because the CRC function can generate collisions: in an unlikely situation, the CRC of an erroneous data burst can match the proper CRC. But since the odds of a burst containing additional errors decrease with each extra bit, the vast majority of errors should be limited to 1-bit and 2-bit errors.
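The 1-bit and 2-bit guarantee can be checked by brute force: flip every possible one- and two-bit combination in a 64-bit burst and confirm the CRC always changes. A quick sketch, again assuming the CRC-8-ATM polynomial:

```python
from itertools import combinations

def crc8(data: bytes, poly: int = 0x07) -> int:
    """CRC-8 with the polynomial commonly cited for GDDR5 EDC (x^8+x^2+x+1)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def flip(burst: bytes, bits) -> bytes:
    """Return a copy of the burst with the given bit positions flipped."""
    out = bytearray(burst)
    for b in bits:
        out[b // 8] ^= 1 << (b % 8)
    return bytes(out)

burst = bytes.fromhex("0123456789abcdef")  # one 64-bit data burst
good = crc8(burst)

# Every possible 1-bit and 2-bit error must change the CRC (64 + 2016 cases).
all_detected = all(
    crc8(flip(burst, bits)) != good
    for n in (1, 2)
    for bits in combinations(range(64), n)
)
assert all_detected
```

With three or more flipped bits, some error patterns can collide with the correct CRC, which is why detection is only probabilistic beyond two bits.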

Should an error be found, the GDDR5 controller will request a retransmission of the faulty data burst, and it will keep doing so until the burst finally goes through correctly. A retransmission request is also used to re-train the GDDR5 link (once again taking advantage of fast link re-training) to correct any potential link problems brought about by changing environmental conditions. Note that this does not involve changing the clock speed of the GDDR5 (i.e. it does not step down in speed); rather it merely reinitializes the link. If the errors are due to the bus being outright unable to handle the requested clock speed, errors will continue to occur and be caught. Keep this in mind, as it will be important when we get to overclocking.
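The retry-and-retrain behavior amounts to a simple loop. The toy model below uses `zlib.crc32` as a convenient stand-in for the hardware CRC-8, and the bit error rate is purely illustrative:

```python
import random
import zlib

retrains = 0

def retrain_link():
    """Stand-in for GDDR5 fast link re-training (clock speed is unchanged)."""
    global retrains
    retrains += 1

def noisy_bus(burst: bytes, bit_error_rate: float) -> bytes:
    """Toy bus model: each bit of the burst flips independently in transit."""
    out = bytearray(burst)
    for i in range(len(out) * 8):
        if random.random() < bit_error_rate:
            out[i // 8] ^= 1 << (i % 8)
    return bytes(out)

def send_burst(burst: bytes, bit_error_rate: float):
    """Retry until the receiver's CRC matches the one sent alongside the data."""
    sent_crc = zlib.crc32(burst)  # stand-in for the hardware CRC-8 on the EDC pins
    attempts = 0
    while True:
        attempts += 1
        received = noisy_bus(burst, bit_error_rate)
        if zlib.crc32(received) == sent_crc:
            return received, attempts
        retrain_link()            # re-train the link, then retransmit the burst

random.seed(1)
data, tries = send_burst(bytes.fromhex("0123456789abcdef"), 0.02)
assert data == bytes.fromhex("0123456789abcdef")
```

As the text notes, if the bus simply cannot sustain the requested clock, this loop keeps firing: the data eventually gets through, but each miss costs a retransmission.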

Finally, we should also note that this error detection scheme is only for detecting bus errors. Errors in the GDDR5 memory modules or errors in the memory controller will not be detected, so it’s still possible to end up with bad data should either of those two devices malfunction. By the same token this is solely a detection scheme, so there are no error correction abilities. The only way to correct a transmission error is to keep trying until the bus gets it right.

Now in spite of the difficulties in building and operating such a high-speed bus, error detection is not necessary for its operation. As AMD was quick to point out to us, cards still need to ship defect-free and not produce any errors. In other words, the error detection mechanism is a failsafe rather than a tool specifically to attain higher memory speeds. Memory supplier Qimonda’s own whitepaper on GDDR5 pitches error detection as a necessary precaution due to the increasing amount of code stored in graphics memory, where a failure can lead to a crash rather than just a bad pixel.

In any case, for normal use the ramifications of using GDDR5’s error detection capabilities should be non-existent. In practice this should lead to more stable cards, since memory bus errors are eliminated, though we don’t know to what degree they occurred before. Having to invoke the retransmission mechanism is itself something of a catch-22, after all: it means an error occurred when it shouldn’t have.

Like the changes to VRM monitoring, the most significant ramifications of this will be felt with overclocking. Overclocking attempts that previously would have pushed the bus too hard and led to errors will no longer do so, making higher overclocks possible. However this is a bit of an illusion, as retransmissions reduce performance. The scenario AMD laid out for us is that overclockers who have reached the limits of their card’s memory bus will now see the impact as a drop in performance due to retransmissions, rather than as crashing or graphical corruption. This means assessing an overclock will require monitoring the card’s performance, along with continuing to look for the traditional signs, as those will still indicate problems in the memory chips and the memory controller itself.
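The trade-off can be put in simple terms: raw bandwidth rises with clock speed, but every failed burst costs a retransmission, so effective bandwidth scales with the per-burst success rate. A toy model (the failure rates are purely illustrative):

```python
def raw_bandwidth_gbps(mem_clock_mhz: float, bus_width_bits: int = 256) -> float:
    """Peak GDDR5 bandwidth in GB/s: base clock x 4 (quad data rate) x bus width."""
    return mem_clock_mhz * 4 * bus_width_bits / 8 / 1000

def effective_bandwidth_gbps(mem_clock_mhz: float, burst_failure_rate: float) -> float:
    """Each failed burst is resent, so throughput scales by the success rate."""
    return raw_bandwidth_gbps(mem_clock_mhz) * (1 - burst_failure_rate)

stock = effective_bandwidth_gbps(1200, 0.0)    # 5870 stock: 153.6 GB/s, error-free
pushed = effective_bandwidth_gbps(1300, 0.15)  # past the bus's limit (illustrative)
assert pushed < stock  # the "higher" overclock is actually slower
```

This is why watching benchmark numbers, not just stability, becomes part of assessing a memory overclock on these cards.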

Ideally there would be a more direct and expedient way to check for errors than watching overall performance, but at this time AMD doesn’t have a way to deliver error notices. Perhaps they will in the future.

Wrapping things up, we have previously discussed fast link re-training as a tool to allow AMD to clock down GDDR5 during idle periods, and as part of a failsafe method to be used with error detection. However it also serves as a tool to enable higher memory speeds through its use in temperature compensation.

Once again owing to its high speeds, GDDR5 is more sensitive to memory chip temperatures than previous memory technologies were. Under normal circumstances this sensitivity would limit memory speeds, as temperature swings would change the behavior of the memory chips enough to make it difficult to maintain a stable link with the memory controller. By monitoring the temperature of the chips and re-training the link whenever there is a significant shift in temperature, link failures are prevented and higher memory speeds become possible.
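The policy boils down to "retrain when the chips have drifted far enough from the conditions the link was last trained under." A minimal sketch; the 5°C threshold is an assumption for illustration, as the real controller's policy isn't documented:

```python
class GDDR5Link:
    """Toy model of temperature-compensated link training. The threshold
    is illustrative; AMD doesn't document the actual trigger point."""

    def __init__(self, temp_c: float, threshold_c: float = 5.0):
        self.trained_at_c = temp_c   # temperature at the last training pass
        self.threshold_c = threshold_c
        self.retrains = 0

    def on_temp_sample(self, temp_c: float) -> None:
        # Re-train when the chips have drifted past the threshold since the
        # link was last trained; the memory clock speed never changes.
        if abs(temp_c - self.trained_at_c) > self.threshold_c:
            self.retrains += 1
            self.trained_at_c = temp_c

link = GDDR5Link(temp_c=40.0)
for t in (42.0, 47.0, 52.0, 53.0):  # card heating up under load
    link.on_temp_sample(t)
assert link.retrains == 2           # retrained at 47C and again at 53C
```

Because fast link re-training takes so little time, retraining on temperature drift is far cheaper than the alternative of leaving enough timing margin to cover the whole temperature range.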

And while temperature compensation may not sound complex, that doesn’t mean it’s not important. As we have mentioned a few times now, the biggest bottleneck in memory performance is the bus. The memory chips can go faster; it’s the bus that can’t. So anything that can help maintain a link along these fragile buses becomes an important tool in achieving higher memory speeds.
