As the GPU company that’s arguably the most transparent about its long-term product plans, NVIDIA still manages to surprise us time and time again. Case in point: we have known since 2012 that NVIDIA’s follow-up architecture to Kepler would be Maxwell, but it’s only more recently that we’ve begun to understand the complete significance of Maxwell to the company’s plans. Each and every generation of GPUs brings with it an important mix of improvements, new features, and enhanced performance; but fundamental shifts are few and far between. So when we found out Maxwell would be one of those fundamental shifts, it changed our perspective and expectations significantly.

What is that fundamental shift? As we found out back at NVIDIA’s CES 2014 press conference, Maxwell is the first NVIDIA GPU that started out as a “mobile first” design, marking a significant change in NVIDIA’s product design philosophy. The days of designing a flagship GPU and scaling down already came to an end with Kepler, when NVIDIA designed GK104 before GK110. But NVIDIA still designed a desktop GPU first, with mobile and SoC-class designs following. Beginning with Maxwell, however, that philosophy too has come to an end. NVIDIA has chosen to embrace power efficiency and mobile-friendly designs as the foundation of its GPU architectures, which is why Maxwell went mobile first. With Maxwell, NVIDIA has made the complete transition from top to bottom, and is now designing GPUs bottom-up instead of top-down.

Nevertheless, a mobile first design is not the same as a mobile first build strategy. NVIDIA has yet to ship a Kepler based SoC, let alone put a Maxwell based SoC on its roadmap. At least for the foreseeable future, discrete GPUs are going to remain the first products built on any new architecture. So while the underlying architecture may be more mobile-friendly than what we’ve seen in the past, what hasn’t changed is that NVIDIA is still getting the ball rolling for a new architecture with relatively big and powerful GPUs.

This brings us to the present, and the world of desktop video cards. Just under two years since the launch of the first Kepler part, the GK104 based GeForce GTX 680, NVIDIA is back and ready to launch its next generation of GPUs, based on the Maxwell architecture.

No two GPU launches are alike – Maxwell’s launch won’t be any more like Kepler’s than Kepler’s was like Fermi’s – but the launch of Maxwell marks an even greater shift than usual. Maxwell’s mobile-first design aside, Maxwell also comes at a time of stagnation on the manufacturing side of the equation. Traditionally we’d see a new manufacturing node ready at TSMC to align with the new architecture, but just as with the situation AMD faced in the launch of its GCN 1.1 based Hawaii GPUs, NVIDIA will be making do on the 28nm node for Maxwell’s launch. The lack of a new node meant that NVIDIA either had to wait until the next node was ready or launch on the existing node, and in the case of Maxwell NVIDIA has opted for the latter.

As a consequence of staying on 28nm, the optimal strategy for releasing GPUs has changed for NVIDIA. From a performance perspective the biggest improvements still come from a node shrink and the resulting increase in transistor density and reduction in power consumption. But there is still room to maneuver within the 28nm node, improving power efficiency and density within a design without changing the node itself. Maxwell is just such a design, further optimizing the efficiency of NVIDIA’s GPUs within the confines of the 28nm node.

With the Maxwell architecture in hand and its 28nm optimizations in place, the final piece of the puzzle is deciding where to launch first. Thanks to the embarrassingly parallel nature of graphics and 3D rendering, at every tier of GPU – from SoC to Tesla – GPUs are fundamentally power limited. Their performance is constrained by how much power they can draw and dissipate, whether that means limiting clockspeed ramp-ups or forgoing a wider GPU with more transistors to flip. This is especially true in the world of SoCs and mobile discrete GPUs, where battery capacity and space limitations put a very hard cap on power consumption.

As a result, not unlike the mobile first strategy NVIDIA used in designing the architecture, when it comes to building their first Maxwell GPU NVIDIA is starting from the bottom. The bulk of NVIDIA’s GPU shipments have been smaller, cheaper, and less power hungry chips like GK107, which for the last two years has formed the backbone of NVIDIA’s mobile offerings, NVIDIA’s cloud server offerings, and of course NVIDIA’s mainstream desktop offerings. So when it came time to roll out Maxwell and its highly optimized 28nm design, there was no better and more effective place for NVIDIA to start than with the successor to GK107: the Maxwell based GM107.

Over the coming months we’ll see GM107 in a number of different products. Its destiny in the mobile space is all but set in stone as the successor to the highly successful GK107, and NVIDIA’s GRID products practically beg for greater efficiency. But for today we’ll be starting on the desktop with the launch of NVIDIA’s latest desktop video cards: GeForce GTX 750 Ti and GeForce GTX 750.

Maxwell’s Feature Set: Kepler Refined


Comments

  • Mondozai - Wednesday, February 19, 2014 - link

    On a PC you must count all costs, including the case; if it is a cheap gaming PC you also want at least a decent mouse and keyboard. You will not get under 400 dollars.

    Plus, you again miss the point of consoles: exclusive games and convenience. Most people do not know how to build their own PC if they buy disparate parts from all over. So factor in assembly costs as well.
  • npz - Wednesday, February 19, 2014 - link

    The 2 CUs are NOT disabled. They are simply reserved. 1280 cores are still available. Any PC program or game doing GPGPU (i.e. physics, mechanics, MS DirectCompute operations) will result in about the same amount being "reserved" on the GPU for non-graphical tasks as well.

    In addition, the PS4 is not exactly Pitcairn, since it also has TrueAudio, something that came with Bonaire.

    And where the hell are you getting your figures from? The R7 260X has 896 cores at 1GHz. The R7 265 only has 1024 cores at 1.1GHz. Simply going by theoretical GFLOPS is meaningless. It doesn't take into account programming architecture, or the rendering process for the ROPs, which again the PS4 GPU has more of.
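For reference, the theoretical GFLOPS figures being traded in this thread come from simple back-of-the-envelope math: shader cores × clock × FLOPs per clock, where 2 FLOPs per clock assumes one fused multiply-add per core per cycle. A quick sketch, plugging in the core counts and clocks quoted above:

```python
def theoretical_gflops(cores, clock_ghz, flops_per_clock=2):
    """Peak single-precision GFLOPS: cores x clock (GHz) x FLOPs per clock.

    flops_per_clock=2 assumes one fused multiply-add (FMA) per core per cycle.
    """
    return cores * clock_ghz * flops_per_clock

print(theoretical_gflops(896, 1.0))   # R7 260X: 896 cores at 1GHz
print(theoretical_gflops(1024, 1.1))  # R7 265: 1024 cores at 1.1GHz
```

Both results are peak rates only; as the comment notes, sustained throughput also depends on the workload, the ROPs, and memory bandwidth, which is exactly why raw GFLOPS comparisons can mislead.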
  • npz - Wednesday, February 19, 2014 - link

    You're also being dishonest by trying to add the cost of a game (and why just 1?) And a subscription. So? How does that not apply to PC as well? At least there's only one cheap annual subscription that gives you access for all online gaming. With PC gaming there are multiple models each from different publishers. Reply
  • Antronman - Tuesday, February 18, 2014 - link

    Wow. What do they think, everybody here is an OC pro who has/had world records and has a monster closed loop browsing/gaming/work setup? I don't give a damn about lower power consumption if it means I have to OC the balls off the card!
  • moozoo - Tuesday, February 18, 2014 - link

    Please include at least one fp64 benchmark in the compute section.
    It is great that you found out and reported the fp64 ratio.
    It's a pity there isn't at least one low power, low profile card with good DP GFLOPS (at least enough to beat the CPU and form a compelling argument to switch APIs).
    At work we only get small form factor PCs, and asking for anything that looks different ends in politics.
  • Ryan Smith - Thursday, February 20, 2014 - link

    For the moment FP64 data is available via Bench. This being a mainstream consumer card, it's purposely not built for high FP64 performance; FP64 is there for compatibility purposes rather than being able to do much in the way of useful work.

    This is a purposeful market segmentation move that won't be going anywhere. So cards such as the 750 Ti will always be very slow at FP64.
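A rough way to see this segmentation from software, assuming a Python environment with NumPy installed, is to time the same arithmetic at both precisions. On the CPU the sketch below typically lands near the 2x you'd expect from doubled data width; on a GM107 card like the 750 Ti, with its 1/32 FP64 rate, a GPU-side version of the same comparison would show a far larger gap:

```python
# Illustrative sketch: compare fp32 vs fp64 matrix-multiply wall time on
# whatever backend NumPy is using (here, the CPU). Consumer GPUs like the
# 750 Ti would show a much larger fp64 penalty than a CPU does.
import time
import numpy as np

def best_time(dtype, n=512, reps=5):
    """Best-of-reps wall time for an n x n matrix multiply at a given precision."""
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        np.dot(a, b)
        best = min(best, time.perf_counter() - t0)
    return best

t32 = best_time(np.float32)
t64 = best_time(np.float64)
print(f"fp64/fp32 time ratio: {t64 / t32:.2f}")
```

The exact ratio depends heavily on the hardware and BLAS library in use, so treat the printed number as indicative rather than a proper benchmark.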
  • jrs77 - Tuesday, February 18, 2014 - link

    Now we need a manufacturer to release a GTX 750 with a single-slot cooler.
  • koolanceGamer - Tuesday, February 18, 2014 - link

    While all of this "low power" stuff is a little boring to me (not that anything is really pushing the high end cards), I hope that in the not too distant future even video cards like the 780/Titan will be able to be powered by the PCIe slot alone.

    I would love to do a gaming build with a PCIe based SSD and no cables coming off the video cards; it would be so clean!
  • EdgeOfDetroit - Tuesday, February 18, 2014 - link

    Well I want laser light circuit cables. So much faster than copper and they would look so clean, you wouldn't even know there was a cable there unless you put your hand into the laser beams to see the pretty lights...

    ... Ahh crap another BSOD, these laser cables suck!
  • Devo2007 - Wednesday, February 19, 2014 - link

    Starting to wonder what a good card to replace a GTX 560 Ti would be (that's still relatively affordable). Would I have to step up to something like the R9 270 or GTX 760 cards to make the upgrade worthwhile? The power savings of the GTX 750 Ti aren't really a big factor since I'm currently using a 650W PSU, but I also don't want to spend a ton of money.
