Maxwell’s Feature Set: Kepler Refined

To start our look at the Maxwell architecture, we'll begin with the feature set, as it's the shorter and easier subject to cover.

In short, Maxwell offers only a handful of new features compared to Kepler. Kepler itself was a natural evolution of Fermi, further building on NVIDIA’s SM design and Direct3D 11 functionality. Maxwell in turn is an even smaller evolution.

From a graphics/gaming perspective there will not be any changes. Maxwell remains a Direct3D 11.0 compliant design, supporting the base 11.0 functionality along with many (but not all) of the features required for Direct3D 11.1 and 11.2. NVIDIA as a whole has not professed much of an interest in being 11.1/11.2 compliant – they weren’t in a rush on 10.1 either – so this didn’t come as a great surprise to us. Nevertheless it is unfortunate, as NVIDIA carries enough market share that their support (or lack thereof) for a feature is often the deciding factor in whether it’s used. Developers can still use cap bits to access the individual features of D3D 11.1/11.2 that Maxwell does support, but we will not be seeing 11.1 or 11.2 become a baseline for PC gaming hardware this year.
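
To illustrate how that works in practice, below is a minimal sketch (ours, not NVIDIA’s or any particular engine’s) of how a Direct3D application queries those optional cap bits on whatever device it creates; the specific caps printed are just a few illustrative examples of the D3D 11.1/11.2 features exposed this way.

```cpp
// Minimal sketch of querying individual D3D 11.1/11.2 cap bits on whatever
// hardware device gets created. The caps printed are illustrative examples;
// requires the Windows 8.1 SDK headers, and error handling is kept minimal.
#include <d3d11_2.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main() {
    ID3D11Device* device = nullptr;
    D3D_FEATURE_LEVEL level;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, &level, nullptr)))
        return 1;

    // Direct3D 11.1 optional features are exposed piecemeal as cap bits
    // rather than as a single "11.1 supported" flag.
    D3D11_FEATURE_DATA_D3D11_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS, &opts, sizeof(opts));
    printf("Output merger logic ops:    %s\n", opts.OutputMergerLogicOp ? "yes" : "no");
    printf("ClearView:                  %s\n", opts.ClearView ? "yes" : "no");
    printf("Constant buffer offsetting: %s\n", opts.ConstantBufferOffsetting ? "yes" : "no");

    // Direct3D 11.2's headline feature, tiled resources, is likewise a per-device cap.
    D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts1 = {};
    device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1, &opts1, sizeof(opts1));
    printf("Tiled resources tier:       %d\n", (int)opts1.TiledResourcesTier);

    device->Release();
    return 0;
}
```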

On the other hand, this means that for the purposes of the GeForce family the GTX 750 series will fit nicely into the current stack despite the architectural differences. Since the consumer perspective is essentially the graphics perspective, Maxwell does not have any features that explicitly set it apart from Kepler. All 700 series parts support the same features, including NVIDIA ecosystem features such as GameWorks, NVENC, and G-Sync, so Maxwell is fully aligned with Kepler in that respect.

At a lower level the feature set has changed only slightly more. I/O functionality is identical to Kepler, with 4 display controllers backing NVIDIA’s display capabilities. HDMI 1.4 and DisplayPort 1.2 functionality join the usual DVI support, with Maxwell arriving a bit too early to support any next-generation display connectivity standards.

Video Encode & Decode

Meanwhile turning our gaze towards video encoding and decoding, we find one of the few areas that has received a feature upgrade on Maxwell. NVENC, NVIDIA’s video encoder, has received an explicit performance boost. NVIDIA tells us that Maxwell’s NVENC should be 1.5x-2x faster than Kepler’s NVENC, or in absolute terms capable of encoding speeds 6x-8x faster than real time.

For the purposes of the GTX 750 series, the impact of this upgrade will depend heavily on how NVENC is being leveraged. For real time applications such as ShadowPlay and GameStream, which by definition can’t operate faster than real time, the benefit will primarily be a reduction in encoding latency by upwards of several milliseconds. For offline video transcoding using utilities such as Cyberlink’s MediaEspresso, the greater throughput should directly translate into faster transcoding.
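
For a sense of scale, here is a bit of back-of-envelope arithmetic based on NVIDIA’s claimed figures above; the clip length and capture rate are our own illustrative assumptions, not measurements.

```cpp
// Back-of-envelope arithmetic only: what "6x-8x faster than real time" means
// for the two use cases above. The 6x/8x multipliers are NVIDIA's claimed
// figures; the 2-hour clip and 60 fps capture rate are our own illustrative
// assumptions, not measured results.
#include <cstdio>

int main() {
    const double clip_minutes = 120.0;   // assumed source clip length for transcoding
    const double capture_fps  = 60.0;    // assumed ShadowPlay capture rate

    const double speeds[] = {6.0, 8.0};  // claimed encode speed vs. real time
    for (double speed : speeds) {
        // Offline transcode: wall-clock time shrinks by the speed multiple.
        printf("%gx real time: a %g-minute clip transcodes in ~%.0f minutes\n",
               speed, clip_minutes, clip_minutes / speed);

        // Real-time capture: the encoder needs only 1/speed of each frame
        // interval, which is where the latency reduction comes from.
        const double frame_ms = 1000.0 / capture_fps;
        printf("  per-frame encode time: ~%.1f ms of a %.1f ms frame budget\n",
               frame_ms / speed, frame_ms);
    }
    return 0;
}
```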

The bigger impact of this will be felt in mobile and server applications, when GM107 makes its introduction in those product lines. In the case of mobile usage the greater performance of Maxwell’s NVENC block directly corresponds with lower power usage, which will reduce the energy costs of using it when operating off of a battery. Meanwhile in server applications the greater performance will allow a sliding scale of latency reductions and an increase in the number of client sessions being streamed off of a single GPU, which for NVIDIA’s purposes means they will get to increase the client density of their GRID products.
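
Again purely as a back-of-envelope sketch (the stream rate and encoder speed multiples below are our own illustrative assumptions, not NVIDIA figures), the session math works out roughly like this:

```cpp
// Rough GRID-style sizing sketch: if the encoder block runs N times faster
// than real time, each frame occupies only 1/N of its frame interval, so
// roughly N real-time client sessions can be time-sliced onto one encoder
// (ignoring every other bottleneck), or fewer sessions at lower latency.
// The 30 fps stream rate and the speed multiples are assumptions only.
#include <cstdio>

int main() {
    const double session_fps = 30.0;            // assumed per-client stream rate
    const double frame_ms    = 1000.0 / session_fps;

    const int speed_multiples[] = {2, 4, 6, 8}; // hypothetical encoder speeds vs. real time
    for (int n : speed_multiples) {
        printf("%dx real time: ~%d sessions per encoder at %g fps, "
               "or ~%.1f ms encode latency for a single session\n",
               n, n, session_fps, frame_ms / n);
    }
    return 0;
}
```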

Speaking of video, decoding is also receiving a bit of a lift. Maxwell’s VP video decode block won’t feature full H.265 (HEVC) support, but NVIDIA is telling us that they will offer partial hardware acceleration, relying on a mix of software and hardware to decode H.265. We had been hoping for full hardware support on Maxwell, but it looks like it’s a bit premature for that in a discrete GPU. The downside to this is that the long upgrade cycle for video cards – many users are averaging 4 years these days – means there’s a good chance that GTX 750 owners will still be on their GTX 750 cards when H.265 content starts arriving in force, so it will be interesting to see just how much of the process NVIDIA can offload onto their hardware as it stands.
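
To make the “mix of software and hardware” idea concrete, here is a purely hypothetical sketch of the kind of fallback logic a player or driver stack could use; none of the names below are a real NVIDIA, NVDEC, or DXVA API.

```cpp
// Purely hypothetical sketch of player-side decode path selection; none of
// these names are a real NVIDIA, NVDEC, or DXVA API. probe_hw_hevc() stands
// in for whatever capability query the driver actually exposes, and on a
// GM107 part it would report Hybrid per NVIDIA's description above.
#include <cstdio>

enum class HevcSupport { None, Hybrid, Full };  // assumed capability levels

// Hypothetical probe: Hybrid = fixed-function hardware handles part of the
// pipeline while the remaining stages run in software on the shaders/CPU.
HevcSupport probe_hw_hevc() { return HevcSupport::Hybrid; }

int main() {
    switch (probe_hw_hevc()) {
    case HevcSupport::Full:
        printf("Full fixed-function HEVC decode\n");
        break;
    case HevcSupport::Hybrid:
        printf("Hybrid decode: hardware-assisted stages plus software for the rest\n");
        break;
    case HevcSupport::None:
        printf("Pure software (CPU) decode fallback\n");
        break;
    }
    return 0;
}
```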

H.265 aside, video decoding overall is getting faster and lower power. NVIDIA tells us that decoding is getting an 8x-10x performance boost thanks to the implementation of a local decoder cache and an increase in memory efficiency for video decoding. On the power consumption front, in addition to the aforementioned performance gains NVIDIA has implemented a new power state called “GC5” specifically for low usage tasks such as video playback. Unfortunately NVIDIA isn’t telling us much about how GC5 works, but as we’ll see in our benchmarks there is a small but distinct improvement in power consumption during video decoding.

177 Comments

  • Mondozai - Wednesday, February 19, 2014 - link

    Anywhere outside of NA gives normal prices. Get out of your bubble.
  • ddriver - Wednesday, February 19, 2014 - link

    Yes, prices here are pretty much normal, no one rushes to waste electricity on something as stupid as bitcoin mining. Anyway, I got most of the cards even before that craze began.
  • R3MF - Tuesday, February 18, 2014 - link

    At ~1bn transistors for 512 Maxwell shaders, I think a 20nm enthusiast card could afford the 10bn transistors necessary for 4096 shaders...
  • Krysto - Tuesday, February 18, 2014 - link

    If Maxwell has 2x the P/W, and Tegra K2 arrives at 16nm with 2 SMX (which is a very reasonable expectation), then Tegra K2 will have at least 1 Teraflop of performance, if not more than 1.2 Teraflops, which would already surpass the Xbox One.

    Now THAT's exciting.
  • chizow - Tuesday, February 18, 2014 - link

    It probably won't be Tegra K2; it will most likely be Tegra M1, and could very well have 3x SMM at 20nm (192x2 vs. 128x3), which according to the article might be a 2.7x speed-up vs. just 2x using Kepler's SMX arch. But yes, certainly all very exciting possibilities.
  • grahaman27 - Wednesday, February 19, 2014 - link

    The Tegra M1 will be on 16nm FinFET if they stick to their roadmap. But since they are bringing the 64-bit version sooner than expected, I don't know what to expect. BTW, it has yet to be announced what manufacturing process the 64-bit version will be on... we can only hope TSMC 20nm will arrive in time.
  • Mondozai - Wednesday, February 19, 2014 - link

    Exciting or f%#king embarrassing for M$? Or for the console industry overall.
  • RealiBrad - Tuesday, February 18, 2014 - link

    Looks to be an OK card when you consider that mining has caused AMD cards to sell out and prices to go up.

    It looks like the R7 265 is fairly close on power, temp, and noise. If AMD supply could meet demand, then the 750Ti would need to be much cheaper and would not look nearly as good.
  • Homeles - Tuesday, February 18, 2014 - link

    Load power consumption is clearly in Nvidia's favor.
  • DryAir - Tuesday, February 18, 2014 - link

    Power consumption is way higher... take a look at TPU's review. But price/perf is a lot better, yeah.

    Personally I'm a sucker for low power, and I will gladly pay for it.
