At the risk of sounding like a broken record, the biggest story in the GPU industry over the last year has been about what isn’t happening as opposed to what is. What isn’t happening is a new manufacturing node: after nearly 3 years of TSMC’s 28nm process serving as the leading-edge node for GPUs, it isn’t being replaced any time soon. As of this fall TSMC has 20nm up and running, but only for SoC-class devices such as Qualcomm Snapdragons and Apple’s A8. Consequently, if you’re making something big and powerful like a GPU, all signs point to an unprecedented 4th year of 28nm being the leading node.

We start off with this tidbit because it’s important to understand the manufacturing situation in order to frame everything that follows. In years past TSMC would introduce a new node every 2 years, and farther back still there were even half-nodes in between. This meant that every 1-2 years GPU manufacturers could take advantage of Moore’s Law and pack more hardware into a chip of the same size, rapidly increasing their performance. Given the embarrassingly parallel nature of graphics rendering, it’s this cadence in manufacturing improvements that has driven so much of the advancement of GPUs for so long.

With 28nm, however, that 2 year cadence has stalled, driving GPU manufacturers into an interesting and truly unprecedented corner. They can’t merely rest on their laurels for the 4 years between 28nm and the next node – their continued existence depends on having new products every cycle – so they must find new ways to deliver them. They must iterate on their designs and technology such that, now more than ever, it’s their designs driving progress rather than improvements in manufacturing technology.

What this means is that for consumers and technology enthusiasts alike we are venturing into uncharted territory. With no real precedent to draw from we can only guess what AMD and NVIDIA will do to maintain the pace of innovation in the face of manufacturing stagnation. This makes for a frustrating time – who doesn’t miss GPUs doubling in performance every 2 years? – but also an interesting one. How will AMD and NVIDIA solve the problem they face and bring newer, better products to the market? We don’t know, and not knowing the answer leaves us open to be surprised.

From NVIDIA, the answer has come in two parts this year. NVIDIA’s Kepler architecture, first introduced in 2012, has just about reached its retirement age. NVIDIA continues to develop new architectures on roughly a 2 year cycle, so new manufacturing process or not they have something ready to go. And that something is Maxwell.

GTX 750 Ti: First Generation Maxwell

At the start of this year we saw the first half of the Maxwell architecture in the form of the GeForce GTX 750 and GTX 750 Ti. Based on the first generation Maxwell GM107 GPU, these cards pulled off something we still can hardly believe: a trifecta of improvements over Kepler. GTX 750 Ti was significantly faster than its predecessor, it was denser than its predecessor (though larger overall), and perhaps most importantly it consumed less power than its predecessor. In GM107 NVIDIA was able to significantly improve their performance and reduce their power consumption at the same time, all on the same 28nm manufacturing node we’ve come to know since 2012. For NVIDIA this was a major accomplishment, and to this day competitor AMD doesn’t have a real answer to GM107’s energy efficiency.

However GM107 was only the start of the story. Deviating from their typical strategy of launching a high-end GPU first – either a 100/110 or 104 GPU – NVIDIA told us up front that while they were launching at the low end first because that made the most sense for them, they would be following up on GM107 later this year with what was at the time being called “second generation Maxwell”. Now, 7 months later and true to their word, NVIDIA is back in the spotlight with the first of the second generation Maxwell GPUs, GM204.

GM204 follows up on GM107 with everything we loved about the first Maxwell GPU, and then some. “Second generation” in this case is not just a description of the second wave of Maxwell GPUs; it is in fact a technically accurate description of the Maxwell 2 architecture. As we’ll see in our deep dive into the architecture, Maxwell 2 has learned some new tricks compared to Maxwell 1 that make it an even more potent processor and further extend the functionality of the family.

NVIDIA GPU Specification Comparison
  GTX 980 GTX 970 (Corrected) GTX 780 Ti GTX 770
CUDA Cores 2048 1664 2880 1536
Texture Units 128 104 240 128
ROPs 64 56 48 32
Core Clock 1126MHz 1050MHz 875MHz 1046MHz
Boost Clock 1216MHz 1178MHz 928MHz 1085MHz
Memory Clock 7GHz GDDR5 7GHz GDDR5 7GHz GDDR5 7GHz GDDR5
Memory Bus Width 256-bit 256-bit 384-bit 256-bit
FP64 1/32 FP32 1/32 FP32 1/24 FP32 1/24 FP32
TDP 165W 145W 250W 230W
GPU GM204 GM204 GK110 GK104
Transistor Count 5.2B 5.2B 7.1B 3.5B
Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm TSMC 28nm
Launch Date 09/18/14 09/18/14 11/07/13 05/30/13
Launch Price $549 $329 $699 $399
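
For a rough sense of what these specifications add up to, peak FP32 throughput can be estimated as CUDA cores × clock × 2 FLOPs per clock (one fused multiply-add per core per cycle). The following is our own back-of-the-envelope sketch, not an NVIDIA-provided figure, using the boost clocks and TDPs from the table above:

```python
# Back-of-the-envelope peak FP32 throughput (GFLOPS) and efficiency,
# assuming 2 FLOPs/core/clock (one fused multiply-add) at the boost clock.
cards = {
    # name: (CUDA cores, boost clock in MHz, TDP in watts)
    "GTX 980":    (2048, 1216, 165),
    "GTX 970":    (1664, 1178, 145),
    "GTX 780 Ti": (2880, 928,  250),
    "GTX 770":    (1536, 1085, 230),
}

for name, (cores, boost_mhz, tdp) in cards.items():
    gflops = cores * boost_mhz * 2 / 1000  # cores * MHz * 2 FLOPs -> GFLOPS
    print(f"{name}: {gflops:.0f} GFLOPS peak, {gflops / tdp:.1f} GFLOPS/W")
```

By this crude measure the GTX 980’s raw peak (~4.98 TFLOPS) is actually below the GTX 780 Ti’s (~5.35 TFLOPS), yet at 165W versus 250W it delivers far more throughput per watt – which is the Maxwell story in a nutshell: the real-world gains come from efficiency and per-unit improvements rather than brute force.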

Today’s launch will see GM204 placed into two video cards, the GeForce GTX 980 and GeForce GTX 970. We’ll dive into the specs of each in a bit, but from an NVIDIA product standpoint these two parts are the immediate successors to the GTX 780/780 Ti and GTX 770 respectively. As was the case with the GTX 780 and GTX 680 before them, these latest parts are designed and positioned to offer a respectable but by no means massive performance gain over the GTX 700 series. NVIDIA’s target for the upgrade market continues to be owners of cards 2-3 years old – the GTX 600 and GTX 500 series – where the accumulation of performance and feature enhancements over the years adds up to the kind of 70%+ performance improvement most buyers are looking for.

At the very high end the GTX 980 will be unrivaled. It is roughly 10% faster than the GTX 780 Ti while consuming nearly one-third less power. This is enough to keep the single-GPU performance crown solidly in NVIDIA’s hands, maintaining a 10-20% lead over AMD’s flagship Radeon R9 290X. Meanwhile the GTX 970 should fare similarly well; however, as our sample is having compatibility issues that we haven’t been able to resolve in time, that is a discussion we will need to have another day.

NVIDIA will be setting the MSRP of the GTX 980 at $549 and the GTX 970 at $329. Depending on what you’re using as a baseline, this is either a $50 increase over the last price of the GTX 780 and the launch price of the GTX 680, or a roughly $100 price cut compared to the launch prices of the GTX 780 and GTX 780 Ti. Meanwhile the GTX 970 is effectively a drop-in replacement for the GTX 770, launching at the price the GTX 770 has held for so long. We should see both cards at the usual retailers, though at present neither Newegg nor Amazon is showing any inventory yet – likely thanks to the odd launch timing, which coincides with NVIDIA's Game24 event – but you can check on GTX 980 and GTX 970 availability tomorrow.

Fall 2014 GPU Pricing Comparison
Radeon R9 295X2 $1000  
  $550 GeForce GTX 980
Radeon R9 290X $500  
Radeon R9 290 $400  
  $330 GeForce GTX 970
Radeon R9 280X $280  
Radeon R9 285 $250  
Radeon R9 280 $220 GeForce GTX 760

Finally, on a housekeeping note, today’s article will be the first in a series of articles on the GTX 980 series. As NVIDIA has only given us about half a week to look at the GTX 980, we are splitting up our coverage to work within the time constraints. Today we will be covering the GTX 980 and the Maxwell 2 architecture, including its construction, features, and the resulting GM204 GPU. Next week we will be looking at GTX 980 SLI performance and PCIe bandwidth, and taking a deeper look at the image quality aspects of NVIDIA’s newest anti-aliasing technologies, Dynamic Super Resolution and Multi-Frame Sampled Anti-Aliasing. Finally, we will be taking a look at the GTX 970 next week once we have a compatible sample. So stay tuned for the rest of our coverage on the Maxwell 2 family.

Maxwell 1 Architecture: The Story So Far


Comments

  • TheJian - Saturday, September 20, 2014 - link
    Did I miss it in the article or did you guys just purposely forget to mention NV claims it does DX12 too? See their own blog. Microsoft's DX12 demo runs on ...MAXWELL. Did I just miss the DX12 talk in the article? Every other review I've read mentions this (techpowerup, tomshardware, hardocp etc etc). Must be that AMD Center still having its effect on your articles ;)

    They were running a converted elemental demo (converted to dx12) and Fable Legends from MS. Yet curiously missing info from this site's review. No surprise I guess with only an AMD portal still :(

    From the link above:
    "Part of McMullen’s presentation was the announcement of a broadly accessible early access program for developers wishing to target DX12. Microsoft will supply the developer with DX12, UE4-DX12 and the source for Epic’s Elemental demo ported to run on the DX12-based engine. In his talk, McMullen demonstrated Maxwell running Elemental at speed and flawlessly. As a development platform for this effort, NVIDIA’s GeForce GPUs and Maxwell in particular is a natural vehicle for DX12 development."

    So Maxwell is a dev platform for DX12, but you guys leave that little detail out so newbs will think it doesn't do it? Major discussion of DX11 stuff missing before, now up to 11.3, but no "oh and it runs all of DX12 btw".

    One more comment on 980: If it's a reference launch, how come other sites already have OC versions (i.e. tomshardware has a Windforce OC 980, though stupidly as usual they downclocked it and the two OC/superclocked 970's they had to ref clocks...ROFL - like you'd buy an OC card and downclock it)? It seems to be a launch of OC cards all around. Newegg even has them in stock (check EVGA OC version):
    And with a $10 rebate so only $559 and a $5 gift card also.
    "This model is factory overclocked to 1241 MHz Base Clock/1342 MHz Boost Clock (1126 MHz/1216 MHz for reference design)"

    Who would buy ref for $10 diff? In fact the ref cards are $569 at Newegg, so you save money buying the faster card...LOL.
  • cactusdog - Saturday, September 20, 2014 - link

    TheJian, Wow, did you read the article? Did you read the conclusion? AT says the 980 is "remarkable", "well engineered", "impeccable design" and has "no competition". They covered almost all of NVIDIA's marketing talking points and you're going to accuse them of a conspiracy? Are you fking retarded??
  • Daniel Egger - Saturday, September 20, 2014 - link

    It would be nice, rather than just talking about the 750 Ti, to also include it in the comparisons to put into clearer perspective what it means to go from Maxwell I to Maxwell II in terms of performance, power consumption, noise and (while we're at it) performance per Watt and performance per $.

    Also, where are the benchmarks for the GTX 970? I sure respect that this card is in a different ballpark, but the somewhat reasonable power output might actually make the GTX 970 a viable candidate for an HTPC build. Is it also possible to use it with just one additional 6-pin connector (since, as you mentioned, this would be within the specs without any overclocking) or does it absolutely need 2 of them?
  • SkyBill40 - Saturday, September 20, 2014 - link

    As was noted in the review at least twice, they were having issues with the 970 and thus it won't be tested in full until next week (along with the 980 in SLI).
  • MrSpadge - Saturday, September 20, 2014 - link

    Wow! This makes me upgrade from a GTX660Ti - not because of gaming (my card is fast enough for my needs) but because of the power efficiency gains for GP-GPU (running GPU-Grid under BOINC). Thank you nVidia for this marvelous chip and fair prices!
  • jarfin - Saturday, September 20, 2014 - link

    I still can't understand AMD's 'uber' option. It has no place in testing, because it's just an 'OC' button, nothing else. The card should be tested as a plain R9 290X, not AnandTech's 'AMD Center' uber way.

    And I can't help feeling, strongly, that AnandTech is leaning toward AMD, because they have their own 'AMD Center' section. It means people can't read their NVIDIA vs. Radeon card reviews without wondering whether AnandTech is taking Radeon's side one way or another. And that seems clear enough.

    I hope AnandTech makes clear that AMD's R9 200 series is really the competition for NVIDIA's 900 series, because everyone knows AMD skipped the 8000 series and put the R9 200 series up against NVIDIA's 700 series, when it should have been the 8000 series. So now the GPU generations on both sides are even.

    Meaning the next AMD series, the R9 300 or whatever is coming, will battle NVIDIA's NEXT generation of GPUs, NOT the 900 series.

    The history of both GPU lines is clear on the net.

    Thank you all

    P.S. Where is the NVIDIA center??
  • Gigaplex - Saturday, September 20, 2014 - link

    Uber mode is not an overclock. It's a fan speed profile change to reduce thermal throttling (underclocking) at the expense of noise.
  • dexgen - Saturday, September 20, 2014 - link

    Ryan, is it possible to see the average clock speeds in different tests after increasing the power and temperature limits in Afterburner?

    And also once the review units for non-reference cards come in it would be very nice to see what the average clock speeds for different cards with and without increased power limit would be. That would be a great comparison for people deciding which card to buy.
  • silverblue - Saturday, September 20, 2014 - link

    Exceptional by NVIDIA; it's always good to see a more powerful yet more frugal card especially at the top end.

    AMD's power consumption could be tackled - at least partly - by some re-engineering. Do they need a super-wide memory bus when NVIDIA are getting by with half the width and moderately faster RAM? Tonga has lossless delta colour compression which largely negates the need for a wide bus, although they did shoot themselves in the foot by not clocking the memory a little higher to anticipate situations where this may not help the 285 overcome the 280.

    Perhaps AMD could divert some of their scant resources towards shoring up their D3D performance to calm down some of the criticism because it does seem like they're leaving performance on the table and perhaps making Mantle look better than it might be as a result.
  • Luke212 - Saturday, September 20, 2014 - link

    Where are the SGEMM compute benchmarks you used to put in high end reviews?
