Maxwell 2 Architecture: Introducing GM204

Now that we’ve had a chance to recap Maxwell 1 and what went into that architecture, let’s talk about the first of the second generation Maxwell GPUs, the GM204.

GM204 may be a second generation Maxwell part, but it is without question still a Maxwell part. Maxwell has learned some new tricks that we are going to cover here, but functionally speaking you can consider GM204 to be a bigger version of GM107, taking more SMMs and more ROP/memory partitions and using them to build a bigger, more powerful GPU.

With GM107 built from 5 SMMs, GM204 is a bit more than a tripled GM107. Altogether NVIDIA is building GM204 out of 16 SMMs, this time divided up into 4 GPCs instead of GM107’s single GPC. These are bound to 64 ROPs and 4 64-bit memory controllers, a 4x increase in ROP count compared to GM107 and a 2x increase in memory bus width.
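
To put that scaling in concrete terms, the totals fall straight out of some simple unit math. A quick sketch (the per-SMM core count is from NVIDIA’s published specifications):

```python
# Deriving GM107/GM204 totals from their per-unit counts
CORES_PER_SMM = 128  # CUDA cores per SMM, unchanged between Maxwell generations

gpus = {
    #  name    SMMs  64-bit MCs  ROPs
    "GM107": (5,    2,          16),
    "GM204": (16,   4,          64),
}
for name, (smms, mcs, rops) in gpus.items():
    print(f"{name}: {smms * CORES_PER_SMM} CUDA cores, "
          f"{rops} ROPs, {mcs * 64}-bit memory bus")
# GM107: 640 CUDA cores, 16 ROPs, 128-bit memory bus
# GM204: 2048 CUDA cores, 64 ROPs, 256-bit memory bus
```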

Drilling down to the SMMs for a second, there are a couple of small changes that need to be noted. Organizationally the GM204 SMM is identical to the GM107 SMM; however, each GM204 SMM gets 96KB of shared memory versus 64KB on GM107. Separate from the combined L1/texture cache, this shared memory services the SMM’s processing blocks, further reducing the need to go to the L2 cache or beyond.

The PolyMorph Engines have also been updated to version 3.0. There are no major performance differences with the 3.0 engines, but they are responsible for implementing some of the new functionality we’ll cover later.

Other than this, GM204’s SMM is identical to the GM107 SMM. This includes the use of 4 shared texture units per pair of SMM processing blocks (8 texture units per SMM), leading to a 16:1 compute-to-texture ratio, and a 256KB register file for each SMM.
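
As a quick sanity check on that ratio (simple arithmetic from the unit counts above):

```python
# 4 texture units per pair of processing blocks, 4 blocks per SMM
texture_units_per_smm = 4 * (4 // 2)      # = 8
print(128 / texture_units_per_smm)        # 16.0 -> the 16:1 compute:texture ratio
print(16 * texture_units_per_smm)         # 128 texture units across GM204's 16 SMMs
```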

Compared to GK104, of course, the changes are more remarkable. Compared to its immediate predecessor, GM204 sees significant differences in both the layout of the SMM and that of the resulting chip, which means that even before accounting for feature differences we can’t just count functional units and directly compare GM204 to GK104. GM204 is overall a more efficient chip, and although it possesses just 33% more CUDA cores than GK104 its performance advantage is much greater, on the order of 50% or more, highlighting the fact that NVIDIA is getting more work out of their CUDA cores than ever before. Altogether, NVIDIA tells us that on average they’re getting 40% more performance per core, which is one of the reasons why GTX 980 can beat even the full GK110 based GTX 780 Ti and its 2880 CUDA cores.
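
Taking NVIDIA’s 40% figure at face value makes the GTX 780 Ti comparison easy to see. A back-of-the-envelope calculation (our arithmetic, not an NVIDIA figure):

```python
# NVIDIA's claim, taken at face value: 40% more work per Maxwell core
maxwell_cores = 2048                  # GTX 980 (GM204)
kepler_equiv  = maxwell_cores * 1.4   # ~2867 "Kepler-equivalent" cores
print(kepler_equiv)                   # vs. GTX 780 Ti's 2880 -- near parity,
# and that's before factoring in GM204's considerably higher clockspeeds
```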

Compute hardware aside, fleshing out GM204 are of course the ROP/memory partitions. Although the constituent hardware hasn’t changed much – we’re still looking at 7GHz GDDR5 memory controllers and the same pixel throughput per ROP – GM204 is very atypical in how it configures these parts.

Until now, high-end NVIDIA designs have used an 8:1 ratio: 8 ROPs (or rather, ROP units that process 8 pixels per clock) paired with each 64-bit memory controller. This gave GK104 32 ROPs, GK110 48 ROPs, and GM107 16 ROPs. Beginning with GM204, however, NVIDIA has doubled the ROP-to-memory controller ratio, and with it doubled their total ROP count compared to GK104. GM204 features a 16:1 ratio, giving us our first NVIDIA GPU with 64 ROPs.

Now, the subject of ROPs is always a dicey one because of the nature of pixel operations. Unlike compute hardware, which can be scaled up rather effectively with more complex workloads and better caching methods, the same is not true of ROPs. ROPs are the ultimate memory bandwidth burner: they are paired with memory controllers specifically because the work they do – Z testing, pixel blending, anti-aliasing – devours immense amounts of bandwidth. As a result, even if you are bottlenecked by ROP performance, increasing the ROP count won’t necessarily be effective if those additional ROPs end up bandwidth starved.
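
A rough back-of-the-envelope calculation shows the scale of the problem. The numbers below are our own illustrative assumptions (one 32-bit color read plus one write per blended pixel, ignoring Z traffic, anti-aliasing, and caching), not NVIDIA figures:

```python
# Peak ROP blending traffic vs. raw memory bandwidth on GM204 (illustrative)
rops = 64
gpu_clock_hz = 1.126e9        # GTX 980 base clock
bytes_per_blend = 4 + 4       # read destination pixel + write result (32-bit color)

demand = rops * gpu_clock_hz * bytes_per_blend / 1e9   # ~577 GB/s
supply = (256 / 8) * 7e9 / 1e9                         # 256-bit bus @ 7Gbps = 224 GB/s
print(f"~{demand:.0f} GB/s demanded vs. {supply:.0f} GB/s available")
```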

NVIDIA ROP To Memory Controller Ratios
GPU                ROP:MC Ratio    Total ROPs
Maxwell (GM204)    16:1            64
Maxwell (GM107)    8:1             16
Kepler (GK110)     8:1             48
Fermi (GF110)      8:1             48
Tesla (GT200)      4:1             32

The last time NVIDIA increased their ROP ratio was for Fermi, when it went from 4:1 to 8:1. That change was largely fueled by the introduction of GDDR5, whose higher data rates provided the bandwidth necessary to feed the greater number of ROPs. Since then GDDR5 clockspeeds have increased a bit for NVIDIA, from 4GHz to 7GHz, but so have ROP clockspeeds, meaning there hasn’t been a significant change in the ability of NVIDIA’s memory controllers to feed their ROPs since Fermi.

Consequently, making the jump to a 16:1 ratio meant that change would need to happen somewhere else. This has led NVIDIA to approach the problem from the other direction: instead of increasing the available memory bandwidth, what can they do to reduce how much of it is needed in the first place?
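
Working out raw memory bandwidth per ROP across recent reference designs makes the squeeze apparent. A rough sketch using reference memory clocks (real-world behavior also depends on caching and workload):

```python
# GB/s of raw memory bandwidth available per ROP
gpus = {
    #  name             bus (bits)  Gbps  ROPs
    "GF110 (GTX 580)": (384,        4.0,  48),
    "GK104 (GTX 680)": (256,        6.0,  32),
    "GM204 (GTX 980)": (256,        7.0,  64),
}
for name, (bus, gbps, rops) in gpus.items():
    bw = bus / 8 * gbps
    print(f"{name}: {bw:.0f} GB/s total, {bw / rops:.2f} GB/s per ROP")
# GM204's 3.50 GB/s per ROP is the lowest of the three, despite 7GHz GDDR5
```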

Color Compression

The solution, and really the key to making a 16:1 ROP ratio feasible, is the latest generation of NVIDIA’s delta color compression technology. Color compression in and of itself is not a new technology, but over successive hardware generations NVIDIA has continued to iterate on it, and in doing so has continued to increase the amount of data they can compress.

NVIDIA first introduced color compression on the GeForce FX series, where it could compress data at up to a 4:1 ratio. The actual compressibility of any frame would in turn depend on its contents. At the most basic level NVIDIA would break a frame down into regions and then attempt to find smaller portions of redundant data to compress. Anti-aliasing was especially favorable here, as the multiple samples for a pixel would frequently all come from a fully covered triangle, making those samples identical. In the case of regular color compression the key is finding whole regions of identical colors, at which point they can potentially be compressed down by as much as 8:1.

More recently, with Fermi NVIDIA introduced delta color compression, which is designed to take color compression beyond simple regions of identical pixels. Delta color compression is essentially focused on pattern compression instead of region compression, compressing based on the differences (deltas) between pixels rather than on their being identical; if you can describe how the pixels differ from one another, then you can save space by describing the delta instead of each individual pixel. Delta color compression works off of the same blocks, essentially applying different delta patterns to them in an attempt to find the best pattern for each block.

Delta compression is by its nature less efficient than whole-region color compression, topping out at just 2:1 compared to 8:1 for the latter. However a 2:1 ratio is still potentially a 50% reduction in data size, which is far better than letting the data go uncompressed. At 4x2 32-bit pixels per region, this would mean reducing a region from 32 bytes to 16 bytes.
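
To make the idea concrete, here is a toy sketch of one possible delta scheme for a 4x2 block of RGBA pixels. It is purely illustrative (NVIDIA’s actual patterns and encodings are not public): keep the first pixel as a full 32-bit anchor, then encode each remaining pixel as four 3-bit signed per-channel deltas from its predecessor, falling back to uncompressed storage whenever a delta doesn’t fit:

```python
def try_delta_compress(block):
    """Toy delta compressor for a 4x2 block of RGBA pixels.

    Hypothetical scheme: pixel 0 is kept as a 32-bit anchor; each later
    pixel becomes four 3-bit signed deltas (one per channel) from the
    previous pixel. Size = 4 + ceil(7 * 4 * 3 / 8) = 15 bytes vs. 32 raw.
    """
    deltas = []
    for prev, cur in zip(block, block[1:]):
        for prev_ch, cur_ch in zip(prev, cur):
            d = cur_ch - prev_ch
            if not -4 <= d <= 3:          # delta won't fit in 3 signed bits
                return None               # block must be stored uncompressed
            deltas.append(d)
    return 4 + (len(deltas) * 3 + 7) // 8  # anchor + bit-packed deltas, bytes

gradient = [(i, i, i, 255) for i in range(0, 16, 2)]        # smooth ramp, 8 pixels
noise = [(0, 0, 0, 255), (200, 13, 77, 255)] + gradient[:6] # one wild pixel
print(try_delta_compress(gradient))  # 15 -> fits the 16-byte 2:1 budget
print(try_delta_compress(noise))     # None -> stored raw at 32 bytes
```

The gradient block packs into 15 bytes, comfortably inside the 16-byte (2:1) budget, while the noisy block has to be stored raw; real hardware tries many such patterns per block and keeps whichever wins.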

NVIDIA’s 3rd generation of color compression, then, is the latest iteration of this technology. The fundamentals have not changed from one generation of delta color compression to the next, but with each iteration NVIDIA has gained the ability to apply more and more patterns to the blocks to find better matches. 3rd generation delta color compression offers the most patterns yet, and with them the most opportunity to compress pixel blocks.

The importance of color compression cannot be overstated. The impact of 3rd generation delta color compression alone is enough to reduce NVIDIA’s bandwidth requirements by 25% over Kepler, and again this comes just from having more delta patterns to choose from. In fact color compression is so important that NVIDIA will actually spend multiple cycles trying different compression ratios, simply because the memory bandwidth saved is more valuable than the computational time spent.
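
Taken at face value, a 25% cut in memory traffic is equivalent to roughly a third more effective bandwidth (simple arithmetic on the figures above, not an NVIDIA-quoted number):

```python
raw = 224               # GM204's physical memory bandwidth in GB/s
traffic_saved = 0.25    # claimed traffic reduction vs. Kepler from 3rd-gen compression
print(f"{raw / (1 - traffic_saved):.0f} GB/s effective")   # ~299 GB/s, a ~1.33x multiplier
```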

Getting back to our ROPs then, it’s the introduction of 3rd generation color compression that, alongside the larger 2MB L2 cache, makes a 16:1 ROP ratio on GM204 viable. Being able to feed 64 ROPs in turn helps NVIDIA’s overall performance, especially at higher resolutions. With 4K monitors taking off, NVIDIA needs to be able to offer competitive performance at those resolutions, and while doubling the number of ROPs won’t double NVIDIA’s performance, it is nonetheless an essential part of scaling up performance for the needs of 4K. AMD for their part already went to 64 ROPs on their high-end GPU with Hawaii last year, and while the subject isn’t nearly as simple as comparing ROP counts, it was one of the factors behind the superior 4K performance scaling we saw from Hawaii cards.

Die Size & Power

Last but certainly not least, now that we’ve had a chance to discuss the architecture of GM204, let’s talk about its physical properties.

One of the problems posed by remaining on the 28nm process is that increasing CUDA core counts will result in larger GPUs. NVIDIA has actually done quite a bit of work on chip density, and as a result the increase in chip size is not as great as the increase in the underlying hardware. Still, GM204 is a more powerful and more complex chip than GK104, and as a result die size and transistor count have gone up.

GM204 ends up weighing in at 5.2 billion transistors, with a die size of 398mm2. This compares to 3.54B transistors and a die size of 294mm2 for GK104, and 7.1B transistors and 551mm2 for GK110. Compared to either Kepler design the overall transistor density is improved, albeit not significantly so.
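
Running the density numbers (straight arithmetic from the figures above) bears this out:

```python
chips = {"GM204": (5.20e9, 398), "GK104": (3.54e9, 294), "GK110": (7.10e9, 551)}
for name, (transistors, die_mm2) in chips.items():
    print(f"{name}: {transistors / die_mm2 / 1e6:.1f}M transistors per mm^2")
# GM204: 13.1M, GK104: 12.0M, GK110: 12.9M -- better, but not dramatically so
```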

More important is the fact that GM204 ends up being NVIDIA’s largest xx4 class GPU. xx4 GPUs are typically NVIDIA’s midrange to high-end consumer workhorses, designed first and foremost for graphics and not packing the advanced compute features, such as high speed FP64 and ECC memory support, that we see in the big x00/x10 GPUs. For cost and product overlap reasons NVIDIA’s sweet spot up until now has been around 300-350mm2, with GK104 coming in just under that range. But at just shy of 400mm2, GM204 is encroaching on newer, larger territory.

To some degree this is an inevitable result of remaining on the 28nm process. More performance requires more transistors, and as a result die size was destined to go up. Nonetheless, the fact that NVIDIA is fabricating such a large GPU as an xx4 GPU is remarkable. It is a good example of just how much hardware (in terms of transistors) NVIDIA had to throw in to reach their performance goals. Viewed another way, it’s telling that NVIDIA is now able to use a 398mm2 chip as the basis of their high-end consumer video card, as opposed to having to use a 551mm2 chip in the form of GK110.

What’s particularly interesting though is that despite the big die, NVIDIA’s power consumption is exceptionally low. By historical standards GK104 was already a low power GPU for its size, particularly in the case of GTX 680, a 195W TDP part with a GPU Boost 1.0 power target of 170W. The GM204 based GTX 980 on the other hand, despite packing in over 1.6B more transistors across another 104mm2 of die area, actually consumes less power than said GK104 based card. At a 165W TDP, NVIDIA’s energy efficiency optimizations are in full effect, and the result is a GPU that consumes surprisingly little power for its size.

Impressively, all of this comes at the same time that NVIDIA is clocking the GPU at over 1.2GHz. This means we are not looking at a simple case of wide-and-slow, as is often the case for power-optimized GPUs (see: SoCs). NVIDIA is clocking GM204 high and hitting it with over 1.2v, and yet it is still able to maintain a 165W TDP in spite of its large die. We’ll look at the competitive ramifications of this later, but keeping power consumption so low on such a large GPU really is a feather in NVIDIA’s cap.

Comments

  • hojnikb - Friday, September 19, 2014 - link

    7950 (which was then rebranded to 280) had 200W. With 280, they obviously upped the TDP for longer turbo speeds.
  • ArtForz - Saturday, September 20, 2014 - link

    Wasn't the 280 more of a rebranded 7950 boost (925 turbo), and not a 7950 (825, no turbo at all)?
  • Mr Perfect - Friday, September 19, 2014 - link

    True, but the 285 didn't live up to the 180 watt claim. Later in the article they showed it saving only 13 watts under load when compared to the 280. So more like 237 watts?

    http://www.anandtech.com/show/8460/amd-radeon-r9-2...

    Which was really quite disappointing. I need something to cram in my mITX rig, and it has to be close to the 150 watts of the 6870 in there now.
  • Samus - Friday, September 19, 2014 - link

    On a per-watt scale, AMD's GPUs are now as inefficient as their CPUs when compared to the competition. It's good they got those console contracts, because they probably won't be getting the next round if this keeps up.

    Absolutely amazing that Maxwell delivers twice the performance per watt of GCN 1.2.
  • Laststop311 - Friday, September 19, 2014 - link

    well looks like the gtx 970 is calling your name then
  • Alexvrb - Saturday, September 20, 2014 - link

    That seems to depend on the design reviewed. THG tested a similarly clocked card by a different manufacturer and there was a much larger gap between the 280 and 285 in terms of power consumption.

    With that being said, the 980 and 970 are both extremely fast and power efficient. Especially the 970 - if it really hits the market at around that pricing, wow! Incredible value.

    Strange that the 980 throttles so much at stock settings even outside of Furmark; first thing I'd do is go into the settings and fiddle a bit until it boosts consistently. But given its performance it's not really a problem, and it can be remedied. Still, something to keep in mind, especially when overclocking. I wonder how the 980 would have done with the beefier cooler from its higher-TDP predecessors, and some mild overvolting?
  • Laststop311 - Sunday, September 21, 2014 - link

    If you look in the gaming benchmarks the GPU is hitting 80C. Nvidia's design does not allow the GPU to exceed 80C, so it has to lower frequencies to stay at 80C. This is the consequence of using the Titan blower cooler but removing the vapor chamber, lowering its cooling capability. That's why I don't get why all these people are rushing to buy the reference design GTX 980's as they are all sold out. They are throttling by hundreds of MHz because the Titan blower cooler without a vapor chamber sucks. Custom cooling options are going to make the GTX 980 able to reliably hit 1300-1400 MHz, some probably even 1500 MHz, under full load and still stay under the 80C limit. Keep an eye out for MSI's Twin Frozr V design. It's going to have a beefy radiator with 2x 100mm fans in an open air design, allowing WAY more cooling potential than the reference design. The Twin Frozr V design should allow the card to OC and actually keep those OC frequencies under heavy load, unlike the reference card which can't even keep up with its stock settings under intense gaming. We should see a pretty big performance jump going to custom coolers, and the reference performance is already staggering.
  • Alexvrb - Sunday, September 21, 2014 - link

    Reviewers and "tech enthusiasts" alike jumped all over AMD when they didn't adequately cool their 290 cards. So while I don't disagree with what you're saying, I am just surprised that they would let it ship with such heavy throttling in ordinary games. Especially given that in this case it isn't because Nvidia shipped with an insufficient cooler - rather it's because by default the fan is running too slowly. Even without the vapor chamber, I bet it would be fine if they turned the fan up just a hair. Not enough to make it loud, but enough to bring it in line with some of the other high-end cards here (under load).

    Anyway I suspect the vapor chamber will return in a higher-end "980 Ti" type configuration. In the meantime, yeah I'd keep an eye out for high-end aftermarket designs with a more aggressive power delivery system and wicked cooling. There's no doubt these chips have serious potential! I'd bet an aggressive 970 could hit the market for under $400 with 980-like performance and a factory warranty. :D

    I'd say "poor AMD" but this kind of leapfrogging is nothing new. Even if AMD can't come out with something really impressive in the next several months, they can always remain competitive by dropping prices. My GPU is idle outside of gaming so the actual difference in power consumption in terms of dollars is tiny. Now, for number-crunching rigs that run their GPUs 24/7... that's a different story altogether. But then again, AMD's professional cards have good DP numbers so it's kind of a wash.
  • Hixbot - Monday, September 22, 2014 - link

    I'm very disappointed they got rid of the vapor chamber. I'm not a fan of the 3rd party coolers, as they exhaust the air into the case (a big deal for small form factor PCs). I prefer blower coolers even though they are noisier, so the loss of the vapor chamber is a big deal.
  • Viewgamer - Friday, September 19, 2014 - link

    LOL, people screaming at the 285. It actually consumes less power than the 980 and 970, not more.
    Nvidia greatly understated the TDP of the 980 and 970, to put it lightly.
    Both cards consume more power than the 250W TDP 7970 GHz Edition, yet they're somehow rated at 165W and 145W. How laughable!
    http://i.imgur.com/nfueVP7.png
