More GDDR5 Technologies: Memory Error Detection & Temperature Compensation

As we previously mentioned, AMD's memory controllers in Cypress implement a greater part of the GDDR5 specification. Beyond gaining the ability to use GDDR5's power-saving features, AMD has also been working on implementing features that allow its cards to reach higher memory clock speeds. Chief among these is support for GDDR5's error detection capabilities.

One of the biggest problems in using a high-speed memory device like GDDR5 is that it requires a bus that's both fast and fairly wide - properties that generally run counter to each other in designing a device bus. A single GDDR5 memory chip on the 5870 needs to connect to a bus that's 32 bits wide and runs at a base speed of 1.2GHz, which means the bus must meet exceedingly precise tolerances. Adding to the challenge, a card like the 5870 with a 256-bit total memory bus requires eight of these buses, leading to more noise from adjoining buses and less room to work in.

Because of the difficulty in building such a bus, the memory bus has become the weak point for video cards using GDDR5. The GPU’s memory controller can do more and the memory chips themselves can do more, but the bus can’t keep up.

To combat this, GDDR5 memory controllers can perform basic error detection on both reads and writes by implementing a CRC-8 hash function. With this feature enabled, for each 64-bit data burst an 8-bit cyclic redundancy check (CRC-8) is transmitted via a set of four dedicated EDC pins. This CRC is then used to check the contents of the data burst, to determine whether any errors were introduced during transmission.
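Conceptually the check works like any CRC: the sender computes an 8-bit checksum over the 64-bit burst, and the receiver recomputes it over what actually arrived and compares. A minimal Python sketch of the idea, using the common CRC-8 polynomial 0x07 purely for illustration (the actual polynomial and framing are defined by the JEDEC GDDR5 specification):

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Bitwise CRC-8 over a byte sequence, MSB-first."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

# A 64-bit data burst is 8 bytes; the 8-bit CRC rides alongside on the EDC pins.
burst = (0xDEADBEEFCAFEF00D).to_bytes(8, "big")
sent_crc = crc8(burst)

# Clean transfer: the receiver's recomputed CRC matches.
assert crc8(burst) == sent_crc

# Flip a single bit in transit: the CRC no longer matches.
corrupted = bytes([burst[0] ^ 0x01]) + burst[1:]
assert crc8(corrupted) != sent_crc
```

Any CRC whose polynomial has more than one term catches all single-bit errors, which is why the 1-bit case is detected with 100% accuracy.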

The specific CRC function used in GDDR5 can detect 1-bit and 2-bit errors with 100% accuracy, with that accuracy falling as the number of erroneous bits grows. This is because the CRC function can generate collisions: in an unlikely case, the CRC of an erroneous data burst can match the CRC of the correct one. But since bursts with more flipped bits are progressively less likely, the vast majority of errors should be 1-bit and 2-bit errors.

Should an error be found, the GDDR5 controller will request a retransmission of the faulty data burst, and it will keep doing this until the data burst finally goes through correctly. A retransmission request is also used to re-train the GDDR5 link (once again taking advantage of fast link re-training) to correct any potential link problems brought about by changing environmental conditions. Note that this does not involve changing the clock speed of the GDDR5 (i.e. it does not step down in speed); rather it's merely reinitializing the link. If the errors are due to the bus being outright unable to handle the requested clock speed, errors will continue to happen and be caught. Keep this in mind, as it will be important when we get to overclocking.
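The retry-and-re-train behavior can be sketched as a simple loop. Everything here is illustrative - the simulated bus and function names stand in for hardware behavior that the controller implements internally:

```python
import random

def crc8(data: bytes, poly: int = 0x07) -> int:
    """Minimal CRC-8 stand-in for the EDC check."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def transmit(burst: bytes, error_rate: float) -> bytes:
    """Simulated bus: occasionally flips one bit in transit."""
    if random.random() < error_rate:
        i = random.randrange(len(burst))
        return burst[:i] + bytes([burst[i] ^ (1 << random.randrange(8))]) + burst[i + 1:]
    return burst

def retrain_link():
    """Placeholder for fast link re-training: the link is reinitialized,
    but the clock speed is NOT stepped down."""
    pass

def send_burst(burst: bytes, error_rate: float = 0.0):
    """Retry until the receiver's recomputed CRC matches the transmitted one."""
    expected = crc8(burst)
    attempts = 0
    while True:
        attempts += 1
        received = transmit(burst, error_rate)
        if crc8(received) == expected:
            return received, attempts
        retrain_link()  # every retransmission request also re-trains the link

data, tries = send_burst(b"\xde\xad\xbe\xef\xca\xfe\xf0\x0d")
# On an error-free bus the burst goes through on the first attempt.
```

Note that the loop never gives up: as the article describes, a bus that simply cannot sustain the requested clock speed will keep producing errors, and the controller will keep catching them and retrying.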

Finally, we should also note that this error detection scheme is only for detecting bus errors. Errors in the GDDR5 memory modules or errors in the memory controller will not be detected, so it’s still possible to end up with bad data should either of those two devices malfunction. By the same token this is solely a detection scheme, so there are no error correction abilities. The only way to correct a transmission error is to keep trying until the bus gets it right.

Now in spite of the difficulties in building and operating such a high-speed bus, error detection is not necessary for its operation. As AMD was quick to point out to us, cards still need to ship defect-free and not produce any errors. In other words, the error detection mechanism is a failsafe rather than a tool specifically to attain higher memory speeds. Memory supplier Qimonda's own whitepaper on GDDR5 pitches error detection as a necessary precaution due to the increasing amount of code stored in graphics memory, where a failure can lead to a crash rather than just a bad pixel.

In any case, for normal use the ramifications of using GDDR5's error detection capabilities should be non-existent. In practice it should lead to more stable cards, since memory bus errors are now caught, though we can't say to what degree. Relying on retransmission in normal operation would itself be a catch-22, after all - it would mean an error occurred when it shouldn't have.

Like the changes to VRM monitoring, the significant ramifications of this will be felt with overclocking. Overclocking attempts that previously would have pushed the bus too hard and produced errors will no longer do so, making higher overclocks possible. However this is a bit of an illusion, as retransmissions reduce performance. The scenario laid out to us by AMD is that overclockers who have reached the limits of their card's memory bus will now see the impact as a drop in performance due to retransmissions, rather than crashing or graphical corruption. This means assessing an overclock will require monitoring the card's performance, along with continuing to look for the traditional signs, as those will still indicate problems in the memory chips and the memory controller itself.
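The performance effect is easy to reason about: if a fraction p of bursts must be retransmitted, each burst takes on average 1/(1-p) transmissions, so useful throughput scales by (1-p). A quick sketch using the 5870's 153.6 GB/s stock bandwidth and a purely hypothetical overclock scenario:

```python
def effective_bandwidth(raw_gbps: float, p_retransmit: float) -> float:
    """Goodput when a fraction p_retransmit of bursts must be resent.
    With independent errors a burst takes 1/(1-p) tries on average,
    so useful throughput scales by (1-p)."""
    return raw_gbps * (1.0 - p_retransmit)

stock = effective_bandwidth(153.6, 0.0)    # error-free at stock: 153.6 GB/s
pushed = effective_bandwidth(160.0, 0.10)  # hypothetical: past the bus limit,
                                           # 10% of bursts retransmitted
# 160.0 * 0.9 = 144.0 GB/s: the "faster" clock delivers less than stock.
```

This is exactly why an over-aggressive memory overclock now shows up as a performance drop rather than a crash - the extra clock speed is more than eaten by retransmissions.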

Ideally there would be a more absolute and expedient way to check for errors than looking at overall performance, but at this time AMD doesn’t have a way to deliver error notices. Maybe in the future they will?

Wrapping things up, we have previously discussed fast link re-training as a tool to allow AMD to clock down GDDR5 during idle periods, and as part of a failsafe method to be used with error detection. However it also serves as a tool to enable higher memory speeds through its use in temperature compensation.

Once again due to the high speeds of GDDR5, it’s more sensitive to memory chip temperatures than previous memory technologies were. Under normal circumstances this sensitivity would limit memory speeds, as temperature swings would change the performance of the memory chips enough to make it difficult to maintain a stable link with the memory controller. By monitoring the temperature of the chips and re-training the link when there are significant shifts in temperature, higher memory speeds are made possible by preventing link failures.
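A sketch of the idea, with an illustrative threshold (the real trigger conditions are internal to AMD's hardware and not publicly specified):

```python
RETRAIN_THRESHOLD_C = 5.0  # hypothetical: re-train after a 5 degree C swing

class TempCompensation:
    """Sketch of temperature-compensated link maintenance.
    Re-trains the link whenever the chip temperature has drifted
    far enough from the temperature at the last training pass."""

    def __init__(self, initial_temp_c: float):
        self.last_trained_at = initial_temp_c

    def on_temp_sample(self, temp_c: float) -> bool:
        """Return True if this sample triggered a link re-train."""
        if abs(temp_c - self.last_trained_at) >= RETRAIN_THRESHOLD_C:
            self.last_trained_at = temp_c  # re-train; record new baseline
            return True
        return False

tc = TempCompensation(initial_temp_c=40.0)
tc.on_temp_sample(42.0)  # small drift: no re-train
tc.on_temp_sample(46.0)  # 6 degree swing from baseline: re-train
```

As with retransmission-triggered re-training, only the link is reinitialized; the memory clock itself is never stepped down.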

And while temperature compensation may not sound complex, that doesn’t mean it’s not important. As we have mentioned a few times now, the biggest bottleneck in memory performance is the bus. The memory chips can go faster; it’s the bus that can’t. So anything that can help maintain a link along these fragile buses becomes an important tool in achieving higher memory speeds.
