GM200 - All Graphics, Hold The Double Precision

Before diving into our look at the GTX Titan X itself, I want to spend a bit of time talking about the GM200 GPU. GM200 is a very interesting GPU, and not for the usual reasons. In fact you could say that GM200 is remarkable for just how unremarkable it is.

From a semiconductor manufacturing standpoint we’re still stuck on 28nm for at least a little while longer, pushing 28nm into its 4th year and causing all sorts of knock-on effects. We’ve droned on about this for some time now, so we won’t repeat ourselves, but ultimately what it means for consumers is that AMD and NVIDIA have needed to make do with the tools they have, and in lieu of generational jumps in manufacturing have focused on architectural efficiency and wringing everything they can out of 28nm.

For NVIDIA those improvements came in the form of the company’s Maxwell architecture, which makes a concentrated effort to maximize energy and architectural efficiency in order to get the most out of the process. In assembling GM204 NVIDIA built the true successor to GK104: a pure graphics chip. From a design standpoint NVIDIA spent their energy efficiency gains on growing out GM204’s die size without increasing power, allowing them to go from 294mm2 and 3.5B transistors to 398mm2 and 5.2B transistors. With a larger die and larger transistor budget, NVIDIA was able to greatly increase performance by laying down a larger number of high performance (and relatively large) Maxwell SMMs.

On the other hand for GM206 and the GTX 960, NVIDIA banked the bulk of their energy savings, building what’s best described as half of a GM204 and leading to a GPU that didn’t offer as huge of a jump in performance from its predecessor (GK106) but also brought power usage down and kept costs in check.


Not Pictured: The 96 FP64 ALUs

But for Big Maxwell, neither option was open to NVIDIA. At 551mm2 GK110 was already a big GPU, so a large (33%) increase in die size like GM204’s was not practical. Neither was holding the die size at roughly the same area and building the Maxwell version of GK110, which would have gained only limited performance in the process. Instead NVIDIA has taken a third option, and this is what makes GM200 so interesting.

For GM200 NVIDIA’s path of choice has been to divorce graphics from high performance FP64 compute. Big Kepler was a graphics powerhouse in its own right, but it also spent quite a bit of die area on FP64 CUDA cores and some other compute-centric functionality. This allowed NVIDIA to use a single GPU across the entire spectrum – GeForce, Quadro, and Tesla – but it also meant that GK110 was a bit of a jack-of-all-trades. Consequently, when faced with another round of 28nm chips and intent on spending their Maxwell power savings on more graphics resources (a la GM204), NVIDIA built a big graphics GPU. Big Maxwell is not the successor to Big Kepler; rather it’s a really (really) big version of GM204.

GM200 is 601mm2 of graphics, and this is what makes it remarkable. There are no special compute features here that only Tesla and Quadro users will tap into (save perhaps ECC), rather it really is GM204 with 50% more GPU. This means we’re looking at the same SMMs as on GM204, featuring 128 FP32 CUDA cores per SMM, a 256KB register file, and just 4 FP64 ALUs per SMM, leading to a puny native FP64 rate of just 1/32. As a result, all of that space in GK110 occupied by FP64 ALUs and other compute hardware – and NVIDIA won’t reveal quite how much space that was – has been reinvested in FP32 ALUs and other graphics-centric hardware.
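That 1/32 figure falls straight out of the per-SMM ALU counts; a quick arithmetic sketch (purely illustrative, assuming each ALU retires one operation per clock):

```python
# Per-SMM ALU counts for Maxwell (GM204/GM200), as described above
FP32_ALUS_PER_SMM = 128
FP64_ALUS_PER_SMM = 4

# With every ALU retiring one op per clock, the native FP64 rate
# is simply the ratio of FP64 to FP32 ALUs within an SMM
fp64_rate = FP64_ALUS_PER_SMM / FP32_ALUS_PER_SMM
print(f"Native FP64 rate: 1/{int(1 / fp64_rate)}")  # Native FP64 rate: 1/32
```

Compare GK110, where the ratio of FP64 to FP32 CUDA cores within an SMX (64:192) gives its 1/3 native rate.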

NVIDIA Big GPUs
GPU                  Die Size  Native FP64 Rate
GM200 (Big Maxwell)  601mm2    1/32
GK110 (Big Kepler)   551mm2    1/3
GF110 (Big Fermi)    520mm2    1/2
GT200 (Big Tesla)    576mm2    1/8
G80                  484mm2    N/A

It’s this graphics “purification” that has enabled NVIDIA to improve their performance over GK110 by 50% without increasing power consumption and with only a moderate 50mm2 (9%) increase in die size. In fact in putting together GM200, NVIDIA has done something they haven’t done for years. The last flagship GPU from the company to dedicate this little space to FP64 was G80 – heart of the GeForce 8800GTX – which in fact didn’t have any FP64 hardware at all. In other words this is the “purest” flagship graphics GPU in 9 years.
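The die-size arithmetic here is worth spelling out. A small sketch using only the figures from the tables in this article (clock speeds and power are deliberately not modeled):

```python
# Die sizes (mm^2) and FP32 CUDA core counts, from the tables in this article
GM200_DIE_MM2, GK110_DIE_MM2 = 601, 551
GM200_CORES, GK110_CORES = 3072, 2880

die_growth = (GM200_DIE_MM2 - GK110_DIE_MM2) / GK110_DIE_MM2
core_growth = (GM200_CORES - GK110_CORES) / GK110_CORES

# Only ~9% more die area and ~7% more FP32 ALUs; the bulk of the ~50%
# performance gain over GK110 therefore comes from reclaimed FP64 area,
# Maxwell's per-ALU efficiency, and clock speeds rather than raw ALU count
print(f"Die growth: {die_growth:.1%}, FP32 ALU growth: {core_growth:.1%}")
```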

Now to be clear here, when we say GM200 favors graphics we don’t mean exclusively, but rather that it favors graphics and its associated FP32 math over FP64 math. GM200 is still an FP32 compute powerhouse, unlike anything else in NVIDIA’s lineup, and we don’t expect it will be matched by anything else from NVIDIA for quite some time. For that reason I wouldn’t be too surprised if we see a Tesla card using it aimed at FP32 users such as the oil & gas industry – something NVIDIA has done once before with the Tesla K10 – but you won’t be seeing GM200 in the successor to Tesla K40.

This is also why the GTX Titan X is arguably not a prosumer level card like the original GTX Titan. NVIDIA shipped the original GTX Titan with its full 1/3 FP64 rate enabled, having it pull double duty as the company’s consumer graphics flagship while also serving as their entry-level FP64 card. For GTX Titan X this is not an option, since GM200 is not a high performance FP64 GPU; as a result the card is riding only on its graphics and FP32 compute capabilities. That doesn’t mean NVIDIA won’t also try to pitch it as a high-performance FP32 card for users who don’t need Tesla, but it won’t be the same kind of entry-level compute card that the original GTX Titan was. In other words, GTX Titan X is much more consumer focused than the original GTX Titan.


Tesla K80: The Only GK210 Card

Looking at the broader picture, I’m left to wonder if this is the start of a permanent divorce between graphics/FP32 compute and FP64 compute in the NVIDIA ecosystem. Until recently, NVIDIA has always piggybacked compute on their flagship GPUs as a means of bootstrapping the launch of the Tesla division. By putting compute in their flagship GPU, even if NVIDIA couldn’t sell those GPUs to compute customers they could sell them to GeForce/Quadro graphics customers. This limited the amount of total risk the company faced, as they’d never end up with a bunch of compute GPUs they could never sell.

However in the last 6 months we’ve seen a shift from NVIDIA at both ends of the spectrum. In November we saw the launch of a Tesla K80, a dual-GPU card featuring the GK210 GPU, a reworked version of GK110 that doubled the register file and shared memory sizes for better performance. GK210 would not come to GeForce or Quadro (though in theory it could have), making it the first compute-centric GPU from NVIDIA. And now with the launch of GM200 we have distinct graphics and compute GPUs from NVIDIA.

NVIDIA GPUs By Compute
                                   GM200        GK210      GK110B
Stream Processors                  3072         2880       2880
Memory Bus Width                   384-bit      384-bit    384-bit
Register File Size (Per SM)        4 x 64KB     512KB      256KB
Shared Memory / L1 Cache (Per SM)  96KB + 24KB  128KB      64KB
Transistor Count                   8B           7.1B(?)    7.1B
Manufacturing Process              TSMC 28nm    TSMC 28nm  TSMC 28nm
Architecture                       Maxwell      Kepler     Kepler
Tesla Products                     None         K80        K40

The remaining question at this point is what happens from here. Was this divorce of compute and graphics a temporary action, the result of being stuck on the 28nm process for another generation? Or was it the first generation in a permanent divorce between graphics and compute, and consequently a divorce between GeForce/Quadro and Tesla? Is NVIDIA finally ready to let Tesla stand on its own?

With Pascal NVIDIA could very well build a jack-of-all-trades style GPU once more. However having already divorced graphics and compute for a generation, merging them again would eat up some of the power and die space benefits from going to 16nm FinFET, power and space that NVIDIA would likely want to invest in greater separate improvements in graphics and compute performance. We’ll see what Pascal brings, but I suspect GM200 is the shape of things to come for GeForce and the GTX Titan lineup.

276 Comments
  • Braincruser - Wednesday, March 18, 2015 - link

    The titan was teased 10 days ago...
  • Tunnah - Wednesday, March 18, 2015 - link

    It feels like nVidia are just taking the pee out of us now. I was semi-miffed at the 970 controversy; I know for business reasons etc. it doesn't make sense to truly trounce the competition (and your own products) when you can instead hold something back and keep it tighter, and have something to release in case they surprise you.

    And I was semi-miffed when I heard it would be more like a 33% improvement over the current cream of the crop, instead of the closer to 50% increase the Titan was over the 680, because they have to worry about the 390x, and leave room for a Titan X White Y Grey SuperHappyTime version.

    But to still charge $1000 even though they are keeping the DP performance low, this is just too far. The whole reasoning for the high price tag was you were getting a card that was not only a beast of a gaming card, but it would hold its own as a workstation card too, as long as you didn't need the full Quadro service. Now it is nothing more than a high end card, a halo product...that isn't actually that good!

    When it comes down to it, you're paying 250% the cost for 33% more performance, and that is disgusting. Don't even bring RAM into it, it's not only super cheap and in no way a justification for the cost, but in fact is useless, because NO GAMER WILL EVER NEED THAT MUCH, IT WAS THE FLIM FLAMMING WORKSTATION CROWD WHO NEEDING THAT FLIM FLAMMING AMOUNT OF FLOOMING RAM YOU FLUPPERS!

    This feels like a big juicy gob of spit in our faces. I know most people bought these purely for the gaming option and didn't use the DP capability, but that's not the point - it was WORTH the $999 price tag. This simply is not, not in the slightest. $650, $750 tops because it's the best, after all..but $999 ? Not in this lifetime.

    I've not had an AMD card since way back in the days of ATi, I am well and truly part of the nVidia crowd, even when they had a better card I'd wait for the green team reply. But this is actually insulting to consumers.

    I was never gonna buy one of these, I was waiting on the 980Ti for the 384bit bus and the bumps that come along with it...but now I'm not only hoping the 390x is better than people say because then nVidia will have to make it extra good..I'm hoping it's better than they say so I can actually buy it.

    For shame nVidia, what you're doing with this card is unforgivable
  • Michael Bay - Wednesday, March 18, 2015 - link

    So you're blaming a for-profit company for being for-profit.
  • maximumGPU - Wednesday, March 18, 2015 - link

    No he's not. He's blaming a for-profit company for abusing its position at the expense of its customers.
    Maxwell is great, and I've got 2 of them in my rig. But Titan X is a bit of a joke. The only justification the previous Titan had was that it could be viewed as a cheap professional card. Now that's gone but you're still paying the same price.
    Unfortunately nvidia will put the highest price they can get away with, and $999 doesn't seem to deter some hardcore fans no matter how much poor value it represents.
    I certainly hope the sales don't meet their expectations.
  • TheinsanegamerN - Wednesday, March 18, 2015 - link

    I would argue that the VRAM may be needed later on. 4GB is already tight with SoM, and future games will only push that up.
    People said that 6GB was too much for the OG Titan, but SoM can eat that up at 4K, and other games are not far behind. Especially for SLI setups, that memory will come in handy.
    That's what really killed the 770. The GPU was fine for me, but 2GB was way too little VRAM.
  • Tal Greywolf - Wednesday, March 18, 2015 - link

    Not being a gamer, I would like to see a review in which many of these top-of-the-line gaming cards are tested against a different sort of environment. For example, I'd love to see how cards compare handling graphics software packages such as Photoshop, Premiere Pro, Lightwave, Cinema 4D, SolidWorks and others. If these cards are really pushing the envelope, then they should compare against the Quadro and FirePro lines.
  • Ranger101 - Wednesday, March 18, 2015 - link

    I think it's safe to say that Nvidia make technically superior cards compared to AMD, at least as far as the last 2 generations of GPUs are concerned. While the AMD cards consume more power and produce more heat, this issue is not a determining factor when I upgrade, unlike price and choice.

    I will not buy this card, despite the fact that I find it to be a very desirable and technically impressive card, because I don't like being price-gouged and because I want AMD to be competitive.

    I will buy the 390X because I prefer a "consumer wins" situation where there are at least 2 companies producing competitive products, and let's be clear, AMD GPUs are competitive, even when you factor in what is ultimately a small increase in heat and noise, not to mention lower prices.

    It was a pleasant surprise to see the R295X2 at one point described as "very impressive", yet I think it would have been fair if Ryan had drawn more attention to AMD "wins", even though they are not particularly significant, such as the most stressful Shadow of Mordor benchmarks.

    Most people favour a particular brand, but surely even the most ardent supporters wouldn't want to see a situation where there is ONLY Intel and ONLY Nvidia. We are reaping the rewards of this scenario already in terms of successive generations of Intel CPUs offering performance improvements that are mediocre at best.

    I can only hope that the 390X gets a positive review at Anandtech.
  • Mystichobo - Wednesday, March 18, 2015 - link

    Looking forward to a 390 with the same performance for $400-500. I certainly got my money's worth out of the R9 290 when it was released. Don't understand how anyone could advocate this $1000 single card price bracket created for "top tier".
  • Geforce man - Wednesday, March 18, 2015 - link

    What still frustrates me is the lack of a modern aftermarket R9 290/X in the comparison.
  • Crunchy005 - Wednesday, March 18, 2015 - link

    I actually really like how the new Titan looks; it shows what can be done. The problem with this card at this price point is that it defeats what the Titan really should be. Without the double precision performance this card becomes irrelevant, I feel (an overpriced gaming card). The original Titan was an entry level compute card outside of the Quadro lineup. I know there are drawbacks to multi-GPU setups, but I would go for 2 980s or 970s for the same or less money than the Titan X.

    I also found these benchmarks very interesting because you can see how much each game can be biased toward a certain card. AMD's 290X, an old card, beat out the 980 in some cases, mostly at 4K resolutions, and lost in others at the same resolution. Just goes to show that you also have to look at individual game performance as well as overall performance when buying a card.

    Can't wait for the 390x from AMD that should be very interesting.
