The Kepler Architecture: Efficiency & Scheduling

So far we've covered how NVIDIA has improved upon Fermi; now let's talk about why.

As mentioned briefly in our introduction, NVIDIA's big push with Kepler is efficiency. Of course Kepler needs to be faster (it always needs to be faster), but at the same time the market is making a gradual shift towards higher efficiency products. On the desktop side of things GPUs have more or less reached their limits as far as total power consumption goes, while in the mobile space products such as Ultrabooks demand GPUs that can match the low power consumption and heat dissipation levels these devices were built around. And while strictly speaking NVIDIA's GPUs haven't been inefficient, AMD has held an edge in performance per mm² for quite some time, so there's clear room for improvement.

In keeping with that ideal, for Kepler NVIDIA has chosen to focus on ways they can improve Fermi's efficiency. As Jonah Alben, NVIDIA's VP of GPU Engineering, puts it, "[we've] already built it, now let's build it better."

There are numerous small changes in Kepler that reflect that goal, but the biggest by far is the removal of the shader clock in favor of functional units wide enough to execute a whole warp over a single clock cycle. The rationale is actually rather straightforward: a shader clock made sense when clockspeeds were low and die space was at a premium, but with increasingly small fabrication processes this has flipped. As we have become familiar with in the CPU space over the last decade, higher clockspeeds become increasingly expensive until you reach a point where they're simply too expensive: just distributing that clock takes a fair bit of power on its own, to say nothing of the difficulty and expense of building functional units that will operate at those speeds.

With Kepler the cost of having a shader clock has finally become too much, leading NVIDIA to make the shift to a single clock. By NVIDIA's own numbers, Kepler's design shift saves power even though NVIDIA has to operate functional units that are twice as large: 2 Kepler CUDA cores consume 90% of the power of a single Fermi CUDA core, while the power consumed by the clock itself has been cut by a far more dramatic 50%.
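To put rough numbers on that tradeoff, here is a back-of-the-envelope sketch using the two ratios above. The 60/40 split between execution-unit power and clock-distribution power is purely a hypothetical assumption for illustration; NVIDIA has not published that breakdown.

```python
# Back-of-the-envelope comparison: one hot-clocked Fermi CUDA core vs.
# two half-clocked Kepler CUDA cores delivering the same throughput.
# The 0.90x execution and 0.50x clocking ratios are NVIDIA's claims; the
# 60/40 split of core power between execution and clock distribution is
# a hypothetical assumption purely for illustration.

fermi_exec = 0.60    # assumed share of power spent in the execution units
fermi_clock = 0.40   # assumed share of power spent distributing the clock

kepler_exec = fermi_exec * 0.90    # 2 wide cores draw 90% of the power
kepler_clock = fermi_clock * 0.50  # half-speed clock draws 50% of the power

fermi_total = fermi_exec + fermi_clock     # 1.00 (baseline)
kepler_total = kepler_exec + kepler_clock  # 0.74

print(f"Kepler relative power at equal throughput: {kepler_total:.2f}")
print(f"Power saved: {1 - kepler_total:.0%}")  # ~26% under these assumptions
```

Under a split weighted more heavily toward execution power the savings shrink toward 10%, and they grow toward 50% as clocking dominates, but the direction of the tradeoff is the same either way.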

Of course as NVIDIA’s own slide clearly points out, this is a true tradeoff. NVIDIA gains on power efficiency, but they lose on area efficiency as 2 Kepler CUDA cores take up more space than a single Fermi CUDA core even though the individual Kepler CUDA cores are smaller. So how did NVIDIA pay for their new die size penalty?

Obviously 28nm plays a significant part in that, but even then the reduction in feature size from moving to TSMC's 28nm process is less than 50%; this isn't enough to pack 1536 CUDA cores into less space than what previously held 384. As it turns out, not only did NVIDIA need to work on power efficiency to make Kepler work, but they also needed to work on area efficiency. There are a few small design choices that save space, such as using 8 SMXes instead of 16 smaller SMXes, but along with dropping the shader clock NVIDIA made one other change to improve both power and area efficiency: scheduling.
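A quick sanity check shows the size of the gap that design, rather than process, has to cover. The idealized (40/28)² scaling factor below assumes perfect linear feature scaling, which real processes never achieve, so the shortfall is if anything understated.

```python
# Idealized density math for the 40nm -> 28nm shrink. Perfect linear
# scaling is assumed, which real processes never achieve, so the true
# density gain from the node alone is even smaller than this.

ideal_density_gain = (40 / 28) ** 2
print(f"Ideal 40nm -> 28nm density gain: {ideal_density_gain:.2f}x")  # ~2.04x

core_gain = 1536 / 384  # GK104 CUDA cores vs. GF114 CUDA cores
print(f"CUDA core increase, GF114 -> GK104: {core_gain:.0f}x")        # 4x

# The remainder is what the architecture itself has to deliver:
print(f"Density gain needed from design: {core_gain / ideal_density_gain:.2f}x")
```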

GF114, owing to its heritage as a compute GPU, had a rather complex scheduler. Fermi GPUs not only handled basic scheduling in hardware, such as register scoreboarding (keeping track of warps waiting on memory accesses and other long latency operations) and choosing the next warp from the pool to execute, but they were also responsible for scheduling instructions within the warps themselves. While hardware scheduling of this nature is not difficult, it is relatively expensive in both power and area, as it requires implementing a complex hardware block to do dependency checking and prevent other types of data hazards. And since GK104 was to have 32 of these complex hardware schedulers, the scheduling system was reevaluated on an area and power efficiency basis, and eventually stripped down.
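To make the inter-warp half of that concrete, here is a toy scoreboard that tracks stalled warps and picks a ready one each cycle. This is a generic textbook model, not NVIDIA's actual hardware design.

```python
# Toy register scoreboard: track which warps are stalled waiting on a
# long-latency operation (e.g. a memory load) and pick a ready warp to
# issue each cycle. A generic textbook model, not NVIDIA's actual design.

class Scoreboard:
    def __init__(self, num_warps):
        # ready_at[warp] = first cycle at which the warp may issue again
        self.ready_at = [0] * num_warps

    def mark_long_latency(self, warp, cycle, latency):
        """Record that `warp` issued an op whose result lands `latency`
        cycles from now; the warp is blocked until then."""
        self.ready_at[warp] = cycle + latency

    def pick_ready_warp(self, cycle):
        """Return any warp whose outstanding results are available."""
        for warp, ready in enumerate(self.ready_at):
            if ready <= cycle:
                return warp
        return None  # every warp is stalled; the SM idles this cycle

sb = Scoreboard(num_warps=4)
sb.mark_long_latency(warp=0, cycle=0, latency=400)  # warp 0 waits on memory
print(sb.pick_ready_warp(cycle=1))  # -> 1: warp 0 is stalled, warp 1 issues
```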

The end result is an interesting one, if only because by conventional standards it’s going in reverse. With GK104 NVIDIA is going back to static scheduling. Traditionally, processors have started with static scheduling and then moved to hardware scheduling as both software and hardware complexity has increased. Hardware instruction scheduling allows the processor to schedule instructions in the most efficient manner in real time as conditions permit, as opposed to strictly following the order of the code itself regardless of the code’s efficiency. This in turn improves the performance of the processor.

However, based on their own internal research and simulations, in their search for efficiency NVIDIA found that hardware scheduling was consuming a fair bit of power and area for relatively little benefit. In particular, since Kepler's math pipeline has a fixed latency, hardware scheduling of the instructions inside of a warp was redundant: the compiler already knows the latency of each math instruction it issues. So NVIDIA has replaced Fermi's complex scheduler with a far simpler scheduler that still uses scoreboarding and other methods for inter-warp scheduling, but moves the scheduling of instructions within a warp into NVIDIA's compiler. In essence it's a return to static scheduling.
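A minimal sketch of what moving that work into the compiler looks like: with fixed, known latencies, dependencies can be resolved and issue slots assigned entirely at compile time. The instruction format and latency values here are invented for illustration and are not Kepler's actual numbers.

```python
# Minimal static scheduler: with fixed, known instruction latencies, the
# compiler can resolve dependencies and assign issue cycles at compile
# time, so no hardware dependency checking is needed at run time.
# Instruction format and latency values are invented for illustration.

LATENCY = {"fmul": 9, "fadd": 9, "mov": 4}  # hypothetical fixed latencies

def schedule(program):
    """program: list of (op, dest, sources). Returns (cycle, op) pairs."""
    result_ready = {}  # register -> cycle its value becomes available
    cycle = 0
    issue_slots = []
    for op, dest, sources in program:
        # Stall (at compile time!) until every source operand is ready.
        cycle = max([cycle] + [result_ready.get(r, 0) for r in sources])
        issue_slots.append((cycle, op))
        result_ready[dest] = cycle + LATENCY[op]
        cycle += 1  # one issue slot per instruction
    return issue_slots

prog = [("fmul", "r2", ["r0", "r1"]),   # r2 = r0 * r1
        ("fadd", "r3", ["r2", "r0"]),   # depends on r2: must wait 9 cycles
        ("mov",  "r4", ["r1"])]         # independent: could be hoisted
for cycle, op in schedule(prog):
    print(f"cycle {cycle:2d}: issue {op}")
```

A real compiler would also reorder independent instructions (like the `mov` above) into the stall slots, which is exactly the work the hardware scheduler no longer has to do.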

Ultimately it remains to be seen just what the impact of this move will be. Hardware scheduling makes all the sense in the world for complex compute applications, which is a big reason why Fermi had hardware scheduling in the first place, and for that matter why AMD moved to hardware scheduling with GCN. At the same time, however, even complex shader programs are simple relative to compute applications, so it's not at all clear that this will have a significant impact on graphics performance; indeed, if it did, we can't imagine NVIDIA would have gone this way.

What is clear at this time though is that NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute, which says a lot right there. Given their call for efficiency and how some of Fermi’s compute capabilities were already stripped for GF114, this does read like an attempt to further strip compute capabilities from their consumer GPUs in order to boost efficiency. Amusingly, whereas AMD seems to have moved closer to Fermi with GCN by adding compute performance, NVIDIA seems to have moved closer to Cayman with Kepler by taking it away.

With that said, in discussing Kepler with NVIDIA's Jonah Alben, one thing that was made clear is that NVIDIA does consider this the better way to go. They're pleased with the performance and efficiency they're getting out of software scheduling, going so far as to say that had they known what they know now about software versus hardware scheduling, they would have done Fermi differently. But whether this only applies to consumer GPUs or if it will apply to Big Kepler too remains to be seen.
