That Darn Memory Bus

Among the entire GTX 600 family, the GTX 660 Ti’s one unique feature is its memory controller layout. NVIDIA built GK104 with 4 memory controllers, each 64 bits wide, giving the entire GPU a combined memory bus width of 256 bits. These memory controllers are tied into the ROPs and L2 cache, with each controller forming part of a ROP partition containing 8 ROPs (or rather 1 ROP unit capable of processing 8 operations), 128KB of L2 cache, and the memory controller. To disable any of those things means taking out a whole ROP partition, which is exactly what NVIDIA has done.

The impact on the ROPs and the L2 cache is rather straightforward – render operation throughput is reduced by 25% and there’s 25% less L2 cache to store data in – but the loss of the memory controller is a much tougher concept to deal with. This goes for both NVIDIA on the design end and for consumers on the usage end.

256 is a nice power-of-two number. For video cards with power-of-two memory bus widths, it’s very easy to equip them with a similarly power-of-two memory capacity such as 1GB, 2GB, or 4GB of memory. For various minor technical reasons (mostly the sanity of the engineers), GPU manufacturers like sticking to power-of-two memory buses. And while this is by no means a hard design constraint in video card manufacturing, there are ramifications for deviating from it.

The biggest consequence of deviating from a power-of-two memory bus is that under normal circumstances a card’s memory capacity will not line up with the bulk of the cards on the market. To use the GTX 500 series as an example, NVIDIA had 1.5GB of memory on the GTX 580 at a time when the common Radeon HD 5870 had 1GB, giving NVIDIA a 512MB advantage. Later on, however, the common Radeon HD 6970 had 2GB of memory, leaving NVIDIA behind by 512MB. This had one additional consequence for NVIDIA: they needed 12 memory chips where AMD needed 8, which generally inflates the bill of materials more than the price of higher speed memory in a narrower design does. This ended up not being a problem for the GTX 580, since 1.5GB was still plenty of memory for 2010/2011 and the high price tag could easily absorb the BoM hit, but this is not always the case.

Because NVIDIA has disabled a ROP partition on GK104 in order to make the GTX 660 Ti, they’re dropping from a power-of-two 256bit bus to an off-size 192bit bus. Under normal circumstances this means that they’d need to either reduce the amount of memory on the card from 2GB to 1.5GB, or double it to 3GB. The former is undesirable for competitive reasons (AMD has 2GB cards below the 660 Ti and 3GB cards above) not to mention the fact that 1.5GB is too small for a $300 card in 2012. The latter on the other hand incurs the BoM hit as NVIDIA moves from 8 memory chips to 12 memory chips, a scenario that the lower margin GTX 660 Ti can’t as easily absorb, not to mention how silly it would be for a GTX 680 to have less memory than a GTX 660 Ti.

Rather than take either of the usual routes, NVIDIA is taking a third route of their own: put 2GB of memory on the GTX 660 Ti anyhow. By putting more memory on one controller than on the other two – in effect breaking the symmetry of the memory banks – NVIDIA can attach 2GB of memory to a 192bit memory bus. This is a technique that NVIDIA has had available to them for quite some time, but it’s also something they rarely pull out, using it only when necessary.

We were first introduced to this technique with the GTX 550 Ti in 2011, which had a similarly odd-sized 192bit memory bus. By using a mix of 2Gb and 1Gb modules, NVIDIA could outfit the card with 1GB of memory rather than the 1.5GB or 768MB that a 192bit memory bus would typically dictate.

For the GTX 660 Ti in 2012 NVIDIA is once again going to use their asymmetrical memory technique in order to outfit the GTX 660 Ti with 2GB of memory on a 192bit bus, but they’re going to be implementing it slightly differently. Whereas the GTX 550 Ti mixed memory chip density in order to get 1GB out of 6 chips, the GTX 660 Ti will mix up the number of chips attached to each controller in order to get 2GB out of 8 chips. Specifically, there will be 4 chips instead of 2 attached to one of the memory controllers, while the other controllers will continue to have 2 chips. By doing it in this manner, this allows NVIDIA to use the same Hynix 2Gb chips they already use in the rest of the GTX 600 series, with the only high-level difference being the width of the bus connecting them.
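To make the capacity math concrete, the two asymmetrical configurations work out as follows. This is our own back-of-the-envelope arithmetic; the helper function and its names are ours, not anything NVIDIA has published:

```python
GBIT_MB = 1024 // 8  # 1Gb of chip density = 128MB

def capacity_mb(chips_per_controller, density_gbit=2):
    """Total capacity in MB, given how many chips sit on each 64bit controller."""
    return sum(chips_per_controller) * density_gbit * GBIT_MB

# GTX 550 Ti (2011): 6 chips across 3 controllers, mixed densities
# (4x 1Gb + 2x 2Gb) to reach 1GB instead of 768MB or 1.5GB
gtx550ti_mb = 4 * 1 * GBIT_MB + 2 * 2 * GBIT_MB
print(gtx550ti_mb)   # 1024

# GTX 660 Ti (2012): 8 uniform 2Gb chips, but 4 of them on one controller
gtx660ti_mb = capacity_mb([2, 2, 4])
print(gtx660ti_mb)   # 2048
```

Either way the arithmetic lands on the marketing-friendly capacity; the difference is whether the asymmetry comes from chip density (550 Ti) or chip count per controller (660 Ti).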

Of course at a low-level it’s more complex than that. In a symmetrical design with an equal amount of RAM on each controller it’s rather easy to interleave memory operations across all of the controllers, which maximizes performance of the memory subsystem as a whole. However complete interleaving requires that kind of a symmetrical design, which means it’s not quite suitable for use on NVIDIA’s asymmetrical memory designs. Instead NVIDIA must start playing tricks. And when tricks are involved, there’s always a downside.

The best case scenario is always going to be that the entire 192bit bus is in use by interleaving a memory operation across all 3 controllers, giving the card 144GB/sec of memory bandwidth (192bit * 6GHz / 8). But that can only be done at up to 1.5GB of memory; the final 512MB of memory is attached to a single memory controller. This invokes the worst case scenario, where only 1 64-bit memory controller is in use and thereby reducing memory bandwidth to a much more modest 48GB/sec.
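The two bandwidth figures above follow directly from bus width and effective data rate; a quick sketch of the arithmetic:

```python
def bandwidth_gb_s(bus_width_bits, data_rate_ghz):
    """Peak memory bandwidth in GB/sec: bus width in bits x effective data rate / 8 bits per byte."""
    return bus_width_bits * data_rate_ghz / 8

# Best case: a memory operation interleaved across all 3 controllers (full 192bit bus)
print(bandwidth_gb_s(192, 6))  # 144.0

# Worst case: only the single 64bit controller holding the last 512MB is active
print(bandwidth_gb_s(64, 6))   # 48.0
```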

How NVIDIA spreads out memory accesses will have a great deal of impact on when we hit these scenarios. In the past we’ve tried to divine how NVIDIA is accomplishing this, but even with the compute capability of CUDA memory appears to be too far abstracted for us to test any specific theories. And because NVIDIA is continuing to label the internal details of their memory bus a competitive advantage, they’re unwilling to share the details of its operation with us. Thus we’re largely dealing with a black box here, one where poking and prodding doesn’t produce much in the way of meaningful results.

As with the GTX 550 Ti, all we can really say at this time is that the performance we get in our benchmarks is the performance we get. Our best guess remains that NVIDIA is interleaving the lower 1.5GB of address space while pushing the final 512MB of address space into the larger memory bank, but we don’t have any hard data to back it up. For most users this shouldn’t be a problem (especially since GK104 is so wishy-washy at compute), but it remains that there’s always a downside to an asymmetrical memory design. With any luck, one day we’ll find that downside and be able to better understand the GTX 660 Ti’s performance in the process.
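For illustration only, here is what that guess would look like as an address-to-controller mapping. Everything in this sketch – the 256-byte interleave stride, the controller indices, the function itself – is a hypothetical model of our theory, not NVIDIA’s disclosed behavior:

```python
STRIDE = 256                      # assumed interleave granularity in bytes (pure guess)
INTERLEAVED = 1536 * 1024 * 1024  # first 1.5GB of address space, interleaved 3 ways
BIG_CONTROLLER = 2                # index of the controller with 4 chips attached

def controller_for(addr):
    """Hypothetical mapping of a byte address to one of the 3 memory controllers."""
    if addr < INTERLEAVED:
        return (addr // STRIDE) % 3   # round-robin across all 3 controllers
    return BIG_CONTROLLER             # the last 512MB lives on one controller alone

# Addresses in the lower region fan out across controllers; the upper region does not
print([controller_for(a) for a in (0, 256, 512, 768)])  # [0, 1, 2, 0]
print(controller_for(INTERLEAVED + 1000))               # 2
```

Under a scheme like this, any working set that fits in 1.5GB sees full bandwidth, while anything spilling into the final 512MB is serialized onto one controller – consistent with the best/worst cases described above.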

Comments

  • TheJian - Friday, August 24, 2012 - link

    Pity I hadn't dug a bit further and found this also...I just checked 3 sites and used them...LOL.

    Even crysis 2 is a wash, ~1fps difference at 1920x1080, and above that we again see below-30 min fps even on the 7950B. It takes the 7970 to do 30, and it won't be there all day; it will likely dip under 30. Ryan comments a few times that your experience won't be great below 60, as those will dip :)

    The 7950 or B rises with volts and a lot of them have a hard time hitting over 1150 and run 80watts more. Not good if that's how you have to clock your card to keep up (or even win...it's bad either way). The one at guru3d.com was a regular 7950 that those #'s came from so it will have a hard time beating a 1300mhz much on NV's side. Memory can hit 7.71 as shown at hardocp with ONE sample. Must be pretty easy. Memory won't be an issue at 1920x1200 or even less at 1920x1080 and you can OC the mem any time you like :) Interesting article.

    Again, 1322/6.7ghz on mem. Above Zotac Amp in both cases. Easy to hit 1300 I guess ;) and it still won't be as hot/noisy or use as many watts at those levels. Not that I'd run either card at max. They're all great cards, it's a consumers dream right now, but NV just seems to be in better position and Ryan's comments were just out of touch with reality.
  • CeriseCogburn - Saturday, August 25, 2012 - link

    Well, as far as overclocking goes, that's almost all amd people were left with since nVidia released the 600 series.
    All the old whines were gone - except a sort of memory whine. That gets proven absolutely worthless, but it never ends anyway.
    Amd does not support their cards with drivers properly like nvidia does; that's just a fact they cannot get away from. No matter how many people claim it's a thing of the past, it comes up every single launch and then continues - that INCLUDES this current / latest amd card released.
    So... it's not a thing of the past.
    No matter how many amd liars say so, they're lying.
  • CeriseCogburn - Saturday, August 25, 2012 - link

    I saw this when their article hit but here is a good laugh... after the you know who fans found it so much fun to attack nVidia about " rejected chips" that couldn't make the cut, look what those criticizers got from their mad amd masters !
    " These numbers paint an interesting picture, albeit not one that is particularly rosy. For the 7970 AMD was already working with top bin Tahiti GPUs, so to make a 7970GE they just needed to apply a bit more voltage and call it a day. The 7950 on the other hand is largely composed of salvaged GPUs that failed to meet 7970 specifications. GPUs that failed due to damaged units aren’t such a big problem here, but GPUs that failed to meet clockspeed targets are another matter. As a result of the fact that AMD is working with salvaged GPUs, AMD has to apply a lot more voltage to a 7950 to guarantee that those poorly clocking GPUs will correctly hit the 925MHz boost clock. "
    ha
    ROFLMHO - oh that great, great, great 40% overclocker needs LOTS OF EXTRA VOLTAGE - TO HIT 925 mhz ..
    LOL
    http://www.anandtech.com/show/6152/amd-announces-n...
    Oh man you can't even make this stuff up !
    HAHAHAHAHHAHAHAHAHAAAAAaaaa
  • Ambilogy - Saturday, August 25, 2012 - link

    Oh, you were comparing it to the 7950? I was promoting the 7870 :) in the spanish forums they did their own kind of review because they don't trust this kind of page reviews, and the OC 7870 of a member performs better than the OC 660TI.

    So if we talk about the 7950

    The winner is clear: the 7950 wins. You are all about facts? Well, deal with this:
    Techpowerup tested the largest number of games, and also reviewed the 660TI in 4 different reviews for each edition. You can talk all you want nvidia fanboys, but techpowerup showed that for your 1080p the 7950 is 5% slower than the 660TI - but then w1zzard himself has a post in the forum saying you have to assume a 5% increase in performance for the 7950 for the boost he did not include. Which yields equal performance on average. Not only that, but tom's hardware shows something you have forgotten: minimum FPS rendered in the games, which shows the 660TI's horrible minimum FPS that indicate a very unstable card. My guess is your god card has very high highs for the good GPU core, but when things get demanding the memory bandwidth can't keep the pace, inducing some kind of lag segments.

    It's easy: if they render the same average performance in games at almost the same price, the card that wins is the one with the better features. That is GPGPU, frame stability, and overclocking, which are by far much more important than closed-source Physx for 2 games every hundred years. Why? OpenCL is starting to get used more and more, and it's showing awesome results. Why do nvidia cards sell more? Well, they still tell the reviewers how to review the card to make it look nice, they made a huge hype of their products, and they have a huge fanbase that cannot see:

    1- Nvidia is selling chips which only look good today so they have faster obsolescence and therefore they can sell their next series better.
    2- They are completely oblivious to the fact that they see amd cards with a non objective point of view.
    3- Proof of equally performing amd cards with more OC room is usually deflected by them talking about the past and attacking the so-called amd fanboys, as follows:

    "REALLY IS CRAPPY CHIPS from the low end loser harvest they had to OVER VOLT to get to their boost...

    LOL
    LOL\
    OLO
    I mean there it is man - the same JUNK amd fanboys always use to attack nVida talking about rejected chips for lower clocked down the line variants has NOW COME TRUE IN FULL BLOWN REALITY FOR AMD....~!
    HAHHAHAHAHAHAHAHHAHA
    AHHAHAHAHAHAA
    omg !
    hahahahahahhahaha
    ahhahaha
    ahaha
    Holy moly. hahahahahhahahha"

    Telling chips are bad without using them, manipulating info showing reviews that favor nvidia, exaggerating features that are not so important, ignoring some that are.

    Explain to me how what I quoted (as an example) changes the fact that I can go and buy a 7950 with a pre-OC, have the same performance on average per w1zz's studies, and even OC more and forget about the 660TI. Explain to me how that overly exaggerated laugh changes the minimum frame rates of the TI and makes them good for no reason. Well, it doesn't change anything, actually.

    The only cure I see for you fan-guys is to get a 7950 and OC it, or buy a good pre-OC version already; then you would stop complaining the moment you see it's not a bad card. And also get the 660TI so you can compare. You will see no difference that could make you still think AMD cards are crap, you will not see the driver issues, you will notice that physx doesn't make the difference, and hopefully you will be a more balanced person.

    I'm not a fanboy, I like nvidia cards, I have had a couple, and to me the 670 is a great card, but not this 660TI crap. I'm not a fanboy because I can tell when a company makes a meh release.
  • CeriseCogburn - Sunday, August 26, 2012 - link

    I've already had better, so you assume far too much, and of course, are a fool. YOU need to go get the card and see the driver problems, PERSONALLY, instead of talking about two other people on some forum...
    Get some personal experience.
    NEXT: Check out the Civ 5 COMPUTE Perf above - this site has the 6970 going up 6+fps while the GTX570 goes down 30 fps... from the former bench...
    http://www.anandtech.com/show/4061/amds-radeon-hd-...
    LOLOL
    No bias here.....
    The 580 that was left out of this review for COMPUTE scored EQUIVALENT to the 7970, 265.7 fps December of 2010.
    So you want to explain how the 570 goes down, the 580 is left out, and the amd card rises ?
    Yeah, see.... there ya go, fanboy - enjoy the cheatie lies.
  • Cliffro - Saturday, September 1, 2012 - link

    The comment section is filled with delusional fanboys from both camps.

    To the Nvidia fanboys: the 600 series is great when you get a working card that doesn't just randomly start losing performance and then eventually refuse to work at all, doesn't Red Screen of Death, or get constant "Driver Stopped Responding" errors, etc. No review mentions these issues.

    To the AMD fanboys: the drivers really do suck, the grey screen of death issue is/was a pain, and the card not responding after turning off the monitors after being idle for however long also sounds like a PITA. Again, no review has ever mentioned these issues.

    I've been using Nvidia the majority of my time gaming, and have used ATI/AMD as well though. Neither one is perfect, both have moments where they just plain SUCK ASS!

    I'm currently using 2 GTX 560 Ti's and am considering up/sidegrading to a single 670/680 or 7970/7950, and during my research I've read horror stories about both the 600 series and the 7000 series. What's funny is everyone ALWAYS says look at the reviews, none of which mention the failures from both camps. None speak of the reliability of the cards, because they have them and test them for what, a week at most?

    Here's a good example: one of the fastest 670's was the Asus 670 DCII Top. It got rave reviews, but horrible user reviews because of reliability issues, got discontinued, and is no longer available at Newegg.

    I can see why EVGA dropped their lifetime warranty.

    All of this said, I'm actually leaning towards AMD this round, sure they have issues and even outright failures but they aren't as prominent as the ones I'm reading about from Nvidia. I don't like feeling like I'm playing the lottery when buying a video card, and with the 600 series from Nvidia that's the feeling I'm getting.
  • Cliffro - Saturday, September 1, 2012 - link

    I forgot to say YMMV at the end there.
  • CeriseCogburn - Monday, September 3, 2012 - link

    Right at the newegg 680 card you cherry picked for problems..

    "Cons: More expensive than the other 670's

    Other Thoughts: This card at stock settings will beat a stock GTX680 at stock settings in most games. I think this is the best deal for a video card at the moment.

    I sold my 7970 and bought this as AMD's drivers are so bad right now. Anytime your computer sleeps it will crash, and I was experiencing blue screens in some games. I switched from 6970's in crossfire to the 7970 and wished I had my 6970's back because of the driver issues. This card however has been perfect so far and runs much much cooler than my 6970's! They would heat my office up 20 degrees!

    I also have a 7770 in my HTPC and am experiencing driver issues with it as well. AMD really needs to get there act together with their driver releases! "

    LOL - and I'm sure there isn't an amd model design that has been broken for a lot of purchasers....
    Sure....
    One card, and "others here are rabid fanboys" - well if so, you're a rabid idiot.
  • mrfunk10 - Thursday, September 6, 2012 - link

    lol, you've gotta be one of the most ridiculous, blind, hard-headed fanboy troll noobs I've ever seen on the internet. The amd 7 series atm are great cards, and at $300 for the 7950 I'm sure they make nvidia sweat. I myself am running a gigabyte windforce 660ti and am very happy with it, but my god can the 79's OC.
  • Cliffro - Saturday, September 8, 2012 - link

    "One card, and "others here are rabid fanboys" - well if so, you're a rabid idiot. "

    Have you not noticed your own constant posting of Pro Nvidia statements, and at the same time bashing AMD? And I said delusional not rabid. Though you may be on to something with that.....

    EVGA recalled a lot of 670 SC's, gave out FTW models(680 PCB) as replacements. Something about a "bad batch".

    Maybe it's a partner problem, maybe it's an Nvidia problem, I don't know. But I know Asus DCII cards have lots of low ratings regardless of whether it's AMD or Nvidia. The Asus 79xx cards with DCII have 3 eggs or less overall, similar to the 6xx series from them. Gigabyte has better ratings and fewer negatives than Asus, MSI, and even EVGA on some models. So maybe it is a partner problem.

    I also must be imagining my Nvidia TDR errors or drivers/cards crashing (with no recovery) while playing a simple game (Bejeweled 3, yeah I know...) and other games occasionally as well since Nvidia can do nothing wrong in the driver department right? Just like my AMD friend seemed to think I was imagining my AMD driver issues when I had my HD 2900 Pro.

    It's also funny that I'm being attacked by a "Devoted Nvidia fan", and my friends usually consider me a "Devoted Nvidia fan". Go figure. I've never been totally against any company, never anti-Intel or AMD, or Nvidia or ATI/AMD. The only company I have avoided is Hitachi and their hard drives, and Intel initially, because honestly their stuff seemed overpriced during the P4 days.

    Maybe I'm just getting cynical as I get older... but hard drives started becoming unreliable the last couple of years, and now video cards are suffering more failures than I'm used to seeing. And SSD's with Sandforce seem to suck ass as well reliability-wise; they are almost comparable with the 600 series, high speed and more failures than I'm comfortable with. Though in Nvidia's defense, even the 600 series isn't as bad as Sandforce or OCZ or Seagate.
