Maxwell’s Feature Set: Kepler Refined

We'll begin our look at the Maxwell architecture with the feature set, as it's the shorter and easier of the two subjects to cover.

In short, Maxwell offers only a handful of new features compared to Kepler. Kepler itself was a natural evolution of Fermi, further building on NVIDIA’s SM design and Direct3D 11 functionality. Maxwell in turn is an even smaller evolution.

From a graphics/gaming perspective there will not be any changes. Maxwell remains a Direct3D 11.0 compliant design, supporting the base 11.0 functionality along with many (but not all) of the features required for Direct3D 11.1 and 11.2. NVIDIA as a whole has not professed much of an interest in being 11.1/11.2 compliant – they weren’t in a rush on 10.1 either – so this didn’t come as a great surprise to us. Nevertheless it is unfortunate, as NVIDIA carries enough market share that their support (or lack thereof) for a feature is often the deciding factor in whether it’s used. Developers can still use cap bits to access the individual features of D3D 11.1/11.2 that Maxwell does support, but we will not be seeing 11.1 or 11.2 becoming a baseline for PC gaming hardware this year.
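
For the curious, those cap bits are exposed through Direct3D’s feature-support query. Below is a minimal sketch of how a developer might probe a couple of the optional 11.1-era features on a device created at feature level 11_0; the particular features checked here are just illustrative examples, and the ID3D11Device is assumed to have already been created.

```cpp
#include <d3d11.h>
#include <cstdio>

// Probe individual Direct3D 11.1-era features via cap bits on a device
// that only reports feature level 11_0. 'device' is assumed to be an
// already-created ID3D11Device (e.g. from D3D11CreateDevice).
// Requires the D3D 11.1 runtime (Windows 8, or Win7 + Platform Update).
void ProbeOptionalFeatures(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS,
                                              &opts, sizeof(opts))))
    {
        // Each member is an independent cap bit; a title can use whichever
        // subset the hardware exposes without full 11.1/11.2 compliance.
        printf("Output merger logic ops:        %s\n",
               opts.OutputMergerLogicOp ? "yes" : "no");
        printf("Partial constant buffer update: %s\n",
               opts.ConstantBufferPartialUpdate ? "yes" : "no");
    }
}
```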

On the other hand this means that for the purposes of the GeForce family, the GTX 750 series will fit nicely into the current product stack despite the architectural differences. Since the consumer perspective is essentially analogous to the graphics perspective, Maxwell does not have any features that explicitly set it apart from Kepler. All 700 series parts support the same features, including NVIDIA ecosystem features such as GameWorks, NVENC, and G-Sync, so Maxwell is fully aligned with Kepler in that respect.

At a lower level the feature set has changed only slightly more. I/O functionality is identical to Kepler, with 4 display controllers backing NVIDIA’s display capabilities. HDMI 1.4 and DisplayPort 1.2 functionality join the usual DVI support, with Maxwell being a bit early to support any next-generation display connectivity standards.
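
As a rough illustration, the outputs driven by those display controllers can be enumerated from software through DXGI. A minimal sketch follows; note that DXGI only reports outputs with a display actually attached, so unused controllers won’t appear in the list.

```cpp
#include <dxgi.h>
#include <cwchar>

// Enumerate adapters and their attached display outputs via DXGI.
// Only outputs with a display connected are reported, so this shows
// the controllers in use rather than the full count the GPU supports.
// Link against dxgi.lib.
int main()
{
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&factory)))
        return 1;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a)
    {
        DXGI_ADAPTER_DESC1 ad;
        adapter->GetDesc1(&ad);
        wprintf(L"Adapter %u: %s\n", a, ad.Description);

        IDXGIOutput* output = nullptr;
        for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o)
        {
            DXGI_OUTPUT_DESC od;
            output->GetDesc(&od);
            wprintf(L"  Output %u: %s\n", o, od.DeviceName);
            output->Release();
        }
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```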

Video Encode & Decode

Meanwhile turning our gaze towards video encoding and decoding, we find one of the few areas that has received a feature upgrade on Maxwell. NVENC, NVIDIA’s video encoder, has received an explicit performance boost. NVIDIA tells us that Maxwell’s NVENC should be 1.5x-2x faster than Kepler’s NVENC, or in absolute terms capable of encoding speeds 6x-8x faster than real time.

For the purposes of the GTX 750 series, the impact of this upgrade will heavily depend on how NVENC is being leveraged. For real time applications such as ShadowPlay and GameStream, which by definition can’t operate faster than real time, the benefit will primarily be a reduction in encoding latency by upwards of several milliseconds. For offline video transcoding using utilities such as Cyberlink’s MediaEspresso, the greater throughput should directly translate into faster transcoding.
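To put those figures in rough perspective, 6x-8x real time combined with a 1.5x-2x speedup implies Kepler’s NVENC runs at roughly 4x real time, and the per-frame encode times fall directly out of that arithmetic. A quick illustrative sketch:

```cpp
#include <cstdio>

// Back-of-the-envelope per-frame encode times for a 60fps stream.
// Maxwell's 6x-8x real time figure is NVIDIA's; Kepler at ~4x real
// time is simply what a 1.5x-2x speedup over Kepler implies.
int main()
{
    const double frame_ms         = 1000.0 / 60.0;  // ~16.7 ms of video per frame
    const double kepler_speed     = 4.0;            // Kepler NVENC, x real time
    const double maxwell_speeds[] = { 6.0, 8.0 };   // Maxwell NVENC, x real time

    printf("Kepler:  %.1f ms to encode one frame\n", frame_ms / kepler_speed);
    for (double s : maxwell_speeds)
        printf("Maxwell: %.1f ms to encode one frame at %.0fx real time\n",
               frame_ms / s, s);
    return 0;
}
```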

The bigger impact of this will be felt in mobile and server applications, when GM107 makes its introduction in those product lines. In the case of mobile usage the greater performance of Maxwell’s NVENC block directly corresponds with lower power usage, which will reduce the energy costs of using it when operating off of a battery. Meanwhile in server applications the greater performance will allow a sliding scale of latency reductions and an increase in the number of client sessions being streamed off of a single GPU, which for NVIDIA’s purposes means they will get to increase the client density of their GRID products.

Speaking of video, decoding is also receiving a bit of a lift. Maxwell’s VP video decode block won’t feature full H.265 (HEVC) support, but NVIDIA is telling us that they will offer partial hardware acceleration, relying on a mix of software and hardware to decode H.265. We had been hoping for full hardware support on Maxwell, but it looks like it’s a bit premature for that in a discrete GPU. The downside to this is that the long upgrade cycle for video cards – many users are averaging 4 years these days – means there’s a good chance that GTX 750 owners will still be on their GTX 750 cards when H.265 content starts arriving in force, so it will be interesting to see just how much of the process NVIDIA can offload onto their hardware as it stands.

H.265 aside, video decoding overall is getting faster and lower power. NVIDIA tells us that decoding is getting an 8x-10x performance boost thanks to the implementation of a local decoder cache and an increase in memory efficiency for video decoding. On the power consumption front, alongside the aforementioned performance gains NVIDIA has implemented a new power state called “GC5” specifically for low-usage tasks such as video playback. Unfortunately NVIDIA isn’t telling us much about how GC5 works, but as we’ll see in our benchmarks there is a small but distinct improvement in power consumption in the video decode process.
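
For those wanting to eyeball the improvement themselves, GPU board power can be polled during playback via NVML, which ships with NVIDIA’s drivers. A minimal sketch follows; device index 0 is an assumption, and NVML does not expose power readings on every GeForce board, so treat this as the general approach rather than a guaranteed measurement.

```cpp
#include <nvml.h>
#include <chrono>
#include <cstdio>
#include <thread>

// Poll GPU board power once per second, e.g. while a video is playing,
// to eyeball the idle/decode power delta. Device index 0 is assumed to
// be the card of interest; link against the NVML library.
int main()
{
    if (nvmlInit() != NVML_SUCCESS)
        return 1;

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS)
    {
        for (int i = 0; i < 30; ++i)
        {
            unsigned int mw = 0; // reported in milliwatts
            if (nvmlDeviceGetPowerUsage(dev, &mw) == NVML_SUCCESS)
                printf("t=%2ds  %.1f W\n", i, mw / 1000.0);
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }
    nvmlShutdown();
    return 0;
}
```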

Comments

  • Mondozai - Wednesday, February 19, 2014 - link

    Wait for 800 series budget cards if you have the patience. Hopefully no more than 4-5 months if TSMC does very well on 20nm.
  • Jeffrey Bosboom - Wednesday, February 19, 2014 - link

    I understand the absolute hashrate on these cards will be low, but I'm interested to know how the focus on power consumption improves mining performance per watt. (Though I can't imagine these lowish-end cards would be used, even if efficient, due to the fixed cost of motherboards to put them in.)
  • Antronman - Wednesday, February 19, 2014 - link

    Nvidia's best cards have tiny hash rates compared to 95% of every AMD GPU ever released.
  • JarredWalton - Wednesday, February 19, 2014 - link

    Apparently you're not up to speed on the latest developments. GTX 780 Ti as an example is now hitting about 700 KHash in scrypt, and word is the GTX 750 will be pretty competitive with 250-260 KHash at stock and much lower power consumption. Some people have actually put real effort into optimizing CUDAminer now, so while AMD still has an advantage, it's not nearly as large as it used to be. You could even make the argument that based on perf/watt in mining, some of NVIDIA's cards might even match AMD's top GPUs.
  • darthrevan13 - Wednesday, February 19, 2014 - link

    Why did they choose to retire the 650 Ti Boost and replace it with the 750 Ti? The 650 Ti Boost is a much better card for high end games because of the memory interface. They should have marketed the 750 Ti as the 750 and the 750 as the 740.

    And why on earth did they not include full support for HEVC and DX11.2? You're limiting the industry's adoption for years to come because of your move. I hope they will fix this in the next generation 800 cards or when they transition to 20nm.
  • Ryan Smith - Thursday, February 20, 2014 - link

    Not speaking for NV here, but keep in mind that 650 Ti Boost is a cut-down GK106 chip. All things considered, 750 Ti will be significantly cheaper to produce for similar performance.

    NVIDIA really only needed it to counter Bonaire, and now that they have GM107 that's no longer the case.
  • FXi - Wednesday, February 19, 2014 - link

    No DX 11.2 or even 11.1 support? For THAT price??
    Pass...
  • rish95 - Wednesday, February 19, 2014 - link

    According to GeForce.com it supports 11.2. Not sure what's up with this:

    http://www.geforce.com/hardware/desktop-gpus/gefor...
  • willis936 - Wednesday, February 19, 2014 - link

    You don't need to be compliant to support something. Compliance means you meet all required criteria. Support means you can run it without having necessarily all the bells and whistles. If console hardware has DX compliance then the devs will take advantage of that and when they're ported you'll lose some of the neat graphics tricks. They might still be able to be done in software, you'll just need a bigger GPU to get the same frame rates :p Some things might not be able to be done in software though. Idk enough about DX to say.
  • sourav - Wednesday, February 19, 2014 - link

    Will this card work in a PCIe v2 slot?
