The Kepler Architecture: Efficiency & Scheduling

So far we’ve covered how NVIDIA has improved upon Fermi; now let’s talk about why.

Mentioned quickly in our introduction, NVIDIA’s big push with Kepler is efficiency. Of course Kepler needs to be faster (it always needs to be faster), but at the same time the market is making a gradual shift towards higher efficiency products. On the desktop side of matters GPUs have more or less reached their limits as far as total power consumption goes, while in the mobile space products such as Ultrabooks demand GPUs that can match the low power consumption and heat dissipation levels these devices were built around. And while strictly speaking NVIDIA’s GPUs haven’t been inefficient, AMD has held an edge on performance per mm² for quite some time, so there’s clear room for improvement.

In keeping with that ideal, for Kepler NVIDIA has chosen to focus on ways they can improve Fermi’s efficiency. As NVIDIA's VP of GPU Engineering, Jonah Alben puts it, “[we’ve] already built it, now let's build it better.”

There are numerous small changes in Kepler that reflect that goal, but of course the biggest change was the removal of the shader clock in favor of wider functional units that execute a whole warp over a single clock cycle. The rationale is actually rather straightforward: a shader clock made sense when clockspeeds were low and die space was at a premium, but with increasingly small fabrication processes this has flipped. As the CPU space has shown over the last decade, higher clockspeeds become increasingly expensive until you reach a point where they’re simply too expensive – a point where just distributing that clock takes a fair bit of power on its own, not to mention the difficulty and expense of building functional units that will operate at those speeds.

With Kepler the cost of having a shader clock has finally become too much, leading NVIDIA to make the shift to a single clock. By NVIDIA’s own numbers, Kepler’s design shift saves power even though NVIDIA has to operate functional units that are twice as large: two Kepler CUDA cores consume 90% of the power of a single Fermi CUDA core, while the reduction in power consumption for the clock itself is far more dramatic, with clock power consumption having been reduced by 50%.
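
To put those numbers together, here is a rough iso-throughput comparison (a back-of-the-envelope sketch; the assumption that one Fermi CUDA core on the 2x shader clock delivers the same throughput as two Kepler CUDA cores on the 1x core clock is ours, made only to frame NVIDIA’s figures):

```python
# Back-of-the-envelope comparison of the two approaches at equal throughput.
# Assumption (ours, not NVIDIA's methodology): one Fermi CUDA core on the 2x
# shader clock matches the throughput of two Kepler CUDA cores on the 1x clock.

fermi_core_power = 1.00       # normalized: one Fermi CUDA core on the 2x shader clock
kepler_pair_power = 0.90      # NVIDIA's figure: two Kepler CUDA cores on the 1x clock
clock_power_reduction = 0.50  # NVIDIA's figure: clock distribution power cut in half

core_saving = 1.0 - kepler_pair_power / fermi_core_power
print(f"Execution power saved at equal throughput: {core_saving:.0%}")           # ~10%
print(f"Clock distribution power saved:            {clock_power_reduction:.0%}")  # 50%
```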

Of course as NVIDIA’s own slide clearly points out, this is a true tradeoff. NVIDIA gains on power efficiency, but they lose on area efficiency as 2 Kepler CUDA cores take up more space than a single Fermi CUDA core even though the individual Kepler CUDA cores are smaller. So how did NVIDIA pay for their new die size penalty?

Obviously 28nm plays a significant part in that, but even then the reduction in feature size from moving to TSMC’s 28nm process is less than 50%; this isn’t enough to pack 1536 CUDA cores into less space than what previously held 384. As it turns out, not only did NVIDIA need to work on power efficiency to make Kepler work, but they also needed to work on area efficiency. There are a few small design choices that save space, such as using 8 SMXes instead of 16 smaller SMXes, but along with dropping the shader clock NVIDIA made one other change to improve both power and area efficiency: scheduling.
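
For a sense of scale, here is the idealized arithmetic behind that point (a sketch assuming perfect linear scaling from 40nm to 28nm, which real processes do not achieve):

```python
# Idealized scaling math (illustrative only, not TSMC's actual density figures):
# a 40nm -> 28nm shrink gives roughly a 2x density improvement at best, while
# GK104 carries 4x the CUDA cores of GF114, so architectural area-efficiency
# work has to cover the rest.

ideal_area_per_transistor = (28 / 40) ** 2    # ~0.49x, assuming perfect linear scaling
density_gain = 1 / ideal_area_per_transistor  # ~2.04x
core_ratio = 1536 / 384                       # 4x the CUDA cores

print(f"Best-case density gain from the shrink: {density_gain:.2f}x")
print(f"CUDA core increase over GF114:          {core_ratio:.0f}x")
print(f"Left for the architecture to cover:     {core_ratio / density_gain:.2f}x")
```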

GF114, owing to its heritage as a compute GPU, had a rather complex scheduler. Fermi GPUs not only handled basic scheduling in hardware, such as register scoreboarding (keeping track of warps waiting on memory accesses and other long-latency operations) and choosing the next warp from the pool to execute, but they were also responsible for scheduling instructions within the warps themselves. While hardware scheduling of this nature is not difficult, it is relatively expensive in both power and area, as it requires implementing a complex hardware block to do dependency checking and prevent other types of data hazards. And since GK104 was to have 32 of these complex hardware schedulers, the scheduling system was reevaluated on an area and power efficiency basis, and eventually stripped down.
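
To give a sense of what that hardware block amounts to, here is a minimal register-scoreboarding sketch (purely illustrative; the structure and register names are ours, not Fermi’s actual logic):

```python
# Minimal register-scoreboarding sketch (illustrative only, not Fermi's design):
# a destination register is marked "pending" when a long-latency op issues, and
# a warp is not eligible to issue while any of its source registers are pending.

class Scoreboard:
    def __init__(self):
        self.pending = set()           # registers with results still in flight

    def issue(self, dst_reg):
        self.pending.add(dst_reg)      # e.g. destination of a memory load

    def complete(self, dst_reg):
        self.pending.discard(dst_reg)  # writeback clears the hazard

    def can_issue(self, src_regs):
        # Read-after-write check: stall if any operand is still outstanding.
        return not any(r in self.pending for r in src_regs)

sb = Scoreboard()
sb.issue("r4")                         # LD r4, [addr] -- long latency
print(sb.can_issue(["r4", "r5"]))      # False: an add using r4 must wait
sb.complete("r4")
print(sb.can_issue(["r4", "r5"]))      # True: the data is now available
```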

The end result is an interesting one, if only because by conventional standards it’s going in reverse. With GK104 NVIDIA is going back to static scheduling. Traditionally, processors have started with static scheduling and then moved to hardware scheduling as both software and hardware complexity has increased. Hardware instruction scheduling allows the processor to schedule instructions in the most efficient manner in real time as conditions permit, as opposed to strictly following the order of the code itself regardless of the code’s efficiency. This in turn improves the performance of the processor.

However, based on their own internal research and simulations, in their search for efficiency NVIDIA found that hardware scheduling was consuming a fair bit of power and area for little benefit. In particular, since Kepler’s math pipeline has a fixed latency, hardware scheduling of the instructions inside a warp was redundant, since the compiler already knew the latency of each math instruction it issued. So NVIDIA has replaced Fermi’s complex scheduler with a far simpler scheduler that still uses scoreboarding and other methods for inter-warp scheduling, but moves the scheduling of instructions within a warp into NVIDIA’s compiler. In essence it’s a return to static scheduling.
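
To illustrate what moving that work into the compiler looks like, here is a hypothetical compile-time scheduling sketch; the opcodes and latency values are made up for illustration and do not reflect NVIDIA’s compiler or ISA. With fixed, known latencies the compiler can determine at build time how many cycles must separate a math instruction from its dependent consumer, and either fill that gap with independent work or encode a stall:

```python
# Illustrative compile-time scheduler: with fixed, known instruction latencies
# the compiler can compute when each result becomes available and delay
# dependent instructions accordingly (a real compiler would fill the gaps with
# independent work). The opcodes and latencies below are hypothetical.

LATENCY = {"FMUL": 9, "FADD": 9, "MOV": 4}   # hypothetical fixed latencies, in cycles

def schedule(program):
    """program: list of (opcode, dest, sources). Returns (issue_cycle, instr) pairs."""
    ready_at = {}                 # register -> cycle its value becomes available
    cycle, schedule_out = 0, []
    for op, dst, srcs in program:
        # A dependent instruction cannot issue before its operands are ready.
        cycle = max([cycle] + [ready_at.get(r, 0) for r in srcs])
        schedule_out.append((cycle, (op, dst, srcs)))
        ready_at[dst] = cycle + LATENCY[op]
        cycle += 1                # one instruction issued per cycle thereafter
    return schedule_out

prog = [("FMUL", "r2", ["r0", "r1"]),
        ("FADD", "r3", ["r2", "r2"]),   # depends on the FMUL result
        ("MOV",  "r4", ["r3"])]
for when, instr in schedule(prog):
    print(f"cycle {when:2d}: {instr}")
```

In this sketch the dependent FADD cannot issue until the FMUL’s fixed latency has elapsed; because that latency never varies, the compiler can bake the decision in ahead of time rather than leaving it to a hardware scheduler.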

Ultimately it remains to be seen just what the impact of this move will be. Hardware scheduling makes all the sense in the world for complex compute applications, which is a big reason why Fermi had hardware scheduling in the first place, and for that matter why AMD moved to hardware scheduling with GCN. At the same time, however, when it comes to graphics workloads even complex shader programs are simple relative to compute applications, so it’s not at all clear that this will have a significant impact on graphics performance; and if it did, we can’t imagine NVIDIA would have gone this way.

What is clear at this time though is that NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute, which says a lot right there. Given their call for efficiency and how some of Fermi’s compute capabilities were already stripped for GF114, this does read like an attempt to further strip compute capabilities from their consumer GPUs in order to boost efficiency. Amusingly, whereas AMD seems to have moved closer to Fermi with GCN by adding compute performance, NVIDIA seems to have moved closer to Cayman with Kepler by taking it away.

With that said, in discussing Kepler with NVIDIA’s Jonah Alben, one thing that was made clear is that NVIDIA does consider this the better way to go. They’re pleased with the performance and efficiency they’re getting out of software scheduling, going so far as to say that had they known what they know now about software versus hardware scheduling, they would have done Fermi differently. But whether this only applies to consumer GPUs or whether it will apply to Big Kepler too remains to be seen.

Comments

  • will54 - Thursday, March 22, 2012 - link

    I noticed in the review they said this was based on the GF114 not the GF110, but then they mention that this is the flagship card for Nvidia. Does this mean that this will be the top Nvidia card until the GTX 780, or are they going to bring out a more powerful card in the next couple of months based off the GF110, such as a GTX 685?
  • von Krupp - Friday, March 23, 2012 - link

    That depends entirely on how AMD responds. If AMD were to respond with a single GPU solution that convincingly trumps the GTX 680 (this is extremely improbable), then yes, you could expect GK110.

    However, I expect Nvidia to hold on to GK110 and instead answer the dual-GPU HD 7990 with a dual-GK104 GTX 690.
  • Sq7 - Thursday, March 22, 2012 - link

    ...my 6950 still plays everything smooth as ice at ultra settings :o Eye candy check. Tessellation check. No worries check. To be honest I am not that interested in the current generation of gfx cards. When UE4 comes out I think it will be an optimal time to upgrade.

    But mostly in the end $500 is just too much for a graphics card. And I don't care if the Vatican made it. When I need to upgrade there will always be a sweet little card with my name on it at $300 - $400 be it blue or green. And this launch has just not left me drooling enough to even consider going out of my price range. If Diablo 3 really blows on my current card... Maybe. But somehow I doubt it.
  • ShieTar - Friday, March 23, 2012 - link

    That just means you need a bigger monitor. Or newer games ;-)

    Seriously though, good for you.

    I have two crossfired, overclocked 6950s feeding my 30'', and still find myself playing MMOs like SWTOR or Rift with Shadows and AA switched off, so that I have a chance to stay at > 40 FPS even in scenes with large groups of characters and effects on the screen at once. The same is true for most offline RPGs, like DA2 and The Witcher 2.

    I don't think I have played any games that hit 60 FPS @ 2560x1600 @ "Ultra Settings" except for games that are 5-10 years old.

    Of course, I won’t be paying the $500 any more than you will (or 500€ in my case), because stepping up just one generation of GPUs never makes much sense. Even if it is a solid step up as with this generation, you still pay the full price for only a 20% to 25% performance increase. That’s why I usually skip at least one generation, like going from 2x260 to 2x6950 last summer. That’s when you really get your money’s worth.
  • von Krupp - Friday, March 23, 2012 - link

    Precisely.

    I jumped up from a single GeForce 7800 GT (paired with an Athlon 64 3200+) to dual HD 7970s (paired with an i7-3820). At present, there's nothing I can't crank all the way up at 2560x1440, though I don't foresee being able to continue that within two years. I got 7 years of use out of the previous rig (2005-2012) using a 17" 1280x1024 monitor and I expect to get at least four out of this at 1920x1080 on my U2711.

    Long story short, consoles make it easy to not have to worry about frequent graphics upgrades so that when you finally do upgrade, you can get your money's worth.
  • cmdrdredd - Thursday, March 22, 2012 - link

    Why is Anandtech using Crysis Warhead still and not Crysis 2 with the High Resolution textures and DX11 modification?
  • Malih - Thursday, March 22, 2012 - link

    Pricing is better, but the 7970 is not as much worse than the 680 as some have claimed (well, leaks).

    With similar pricing, AMD is not that far off, although it remains to be seen whether AMD will lower the price.

    For me, I'm a mainstream guy, so I'll see how the mainstream parts perform, and whether AMD will lower the price on their current mainstream cards (78x0). I was thinking about getting a 7870, but AMD's pricing is too high for me; it gets them money in some markets, but not from my pocket.
  • CeriseCogburn - Tuesday, March 27, 2012 - link

    AMD is $120 too high. That's not chump change. At any other time on AnandTech that would be a 1000% game-changing difference!
  • nyran125 - Friday, March 23, 2012 - link

    Some games it wins, others it doesn't. But it's a pretty damn awesome card regardless.
  • asrey1975 - Friday, March 23, 2012 - link

    You're better off with an AMD card.

    Personally, I'm still thinking about buying 2x 6870's to replace my 5870, which runs BF3 no problem on my 27" 1900x1200 Dell monitor.

    They will cost me $165 each, so for $330 all up it's still cheaper than any $500 card (insert brand/model) and will totally kick ass over the 680 or 7970!
