The GPU

Despite the Denver surprise, the big story behind Tegra K1 is its GPU. All previous Tegra designs implemented some derivative of what became known as the GeForce ULP core. This was a non-unified architecture that, at times, looked a lot like NV40. The design was never all that impressive from a performance or power efficiency standpoint; it was cost effective, but often constrained by a narrow memory interface.

Going into Project Logan, which became Tegra K1, NVIDIA made the decision (around 3 years ago) to abandon the GeForce ULP roadmap and instead unify its mobile and PC GPU roadmaps. Tegra K1 would be the first design to leverage a PC GPU, in this case Kepler. The bigger implication is that all future Tegra SoCs will integrate PC GPUs. The even crazier part of all of this is that all future NVIDIA GPUs will start out as mobile-first designs (including Maxwell). Productization and market availability may happen in a different order, but all architectures will start as mobile designs and then be adapted to fit other, higher power segments. This is very much like Intel’s mobile-first realization of the mid-2000s with regard to notebook processors, but with NVIDIA and smartphone/tablet GPUs.

Kepler makes the move into mobile largely unchanged. This is a full Kepler implementation with the same size register file and shared L1, and it is 100% ISA compatible with its big brother. It turns out that Kepler, as it was originally designed, was pretty good for mobile. If you take a GeForce 740M (a 2 SMX/384 CUDA core design), you’re looking at roughly a 19W GPU. Of that 19W, around 3W is memory IO, PCIe and other non-GPU things. You can subtract another 6W for leakage, bringing you down to 10W. Now that’s a 2 SMX design, so divide it in half and you’re down to 5W. Drop the clock from 1GHz down to 900MHz, and the voltage as well, and now we’re talking around 2 - 3W for the GPU core, and that’s without any re-architecting. Granted, you can’t just subtract out things like leakage that cleanly, but you get the point: Kepler wasn’t a bad starting point for a good mobile GPU design.
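To make that back-of-envelope math easier to follow, here it is laid out as a tiny host-side C++ sketch. The 19W, 3W and 6W figures are the article's rough estimates for a GeForce 740M; the assumption that the 1GHz-to-900MHz drop is paired with roughly a 20% voltage reduction is ours for illustration, not an NVIDIA figure.

```
// Sketch of the power arithmetic above. Input figures are the article's rough
// estimates; the voltage scaling factor is an assumption for illustration.
#include <cstdio>

int main() {
    double gf740m_total_w = 19.0; // GeForce 740M (2 SMX @ ~1GHz), total power
    double io_misc_w      = 3.0;  // memory IO, PCIe, other non-GPU consumers
    double leakage_w      = 6.0;  // static leakage

    double core_2smx_w = gf740m_total_w - io_misc_w - leakage_w; // ~10W dynamic
    double core_1smx_w = core_2smx_w / 2.0;                      // ~5W for one SMX

    // Dynamic power scales roughly with f * V^2. Assume the 1GHz -> 900MHz drop
    // comes with a ~20% voltage reduction (our assumption, not NVIDIA's).
    double freq_scale = 0.9;
    double volt_scale = 0.8;
    double scaled_w   = core_1smx_w * freq_scale * volt_scale * volt_scale;

    printf("Single-SMX dynamic power estimate: %.1f W\n", scaled_w); // ~2.9 W
    return 0;
}
```

The output lands right in the 2 - 3W range the article arrives at, which is the whole point of the exercise: the numbers are crude, but the scaling argument holds together.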

Tegra K1 features a single SMX (in a single GPC), which amounts to 192 CUDA cores. NVIDIA made the rookie mistake of calling Tegra K1 a 192-core processor, which made for some great headlines but largely does the industry a disservice.

Tessellation and geometry engines aren’t crippled compared to desktop Kepler. FP64 support is also present, at 1/24 the FP32 rate. There are 4 ROPs and 8 texture units, down from 16 in the PC version of Kepler. The big changes however are in the interconnects between all of the parts of the GPU.

The bigger implementations of Kepler have to be able to efficiently move data between multiple SMXes, ROPs and memory controllers. The interconnect fabric needed to do that doesn’t scale down well for mobile, where in many cases we’re dealing with one or two of those blocks instead of a dozen. By removing the complexity that exists in the bigger Kepler’s fabric you limit mobile Kepler’s ability to scale, but then again mobile Kepler is never going to scale to the size of big desktop GPUs, so it’s not an issue. There are other changes outside of the interconnect as well, including improved clock gating and other power efficiency optimizations.

NVIDIA updated the texture units to support ASTC, something that isn’t present in the desktop Kepler variants at this point. NVIDIA also hopes to use the GPU’s color compression features to reduce memory bandwidth requirements in UI rendering and not just 3D games.

With the changes NVIDIA made to the design, Kepler ends up being a < 2W GPU perfect for mobile. NVIDIA provided us with some data showing SoC + DRAM power while running GFXBench 3.0 (Manhattan), an OpenGL ES 3.0 test:

The data is presented in NVIDIA’s usual way where we’re not looking at peak performance but rather how Tegra K1 behaves when normalized to the performance of Apple’s A7 or Qualcomm’s Snapdragon 800. In both cases NVIDIA is claiming the ability to deliver equal performance at substantially better power efficiency.

NVIDIA shared some live demos that echoed the data above. Peak performance was capped to that of the A7 or Snapdragon 800, but SoC level power was always lower. It remains to be seen what power consumption looks like in a shipping configuration (which is almost always optimized for peak performance not equal performance at lower power), but it’s safe to say that concerns about Kepler being too power hungry for mobile are overrated.

The most compelling argument in favor of putting Kepler in a mobile SoC actually has to do with its API support. In one swift move NVIDIA goes from being disappointing in API support to industry leading. Since this is a full Kepler implementation (just a lower power/performing version) Tegra K1 maintains full API compatibility with NVIDIA’s flagship GeForce products. OpenGL ES 3.0 is supported but so are full OpenGL 4.4, DX11 and CUDA 6.0.
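As a feel for what that compatibility means in practice, here is a minimal vector-add sketch of our own (not NVIDIA sample code). Because Tegra K1 exposes the same Kepler ISA and the full CUDA runtime, the same source should build for a desktop Kepler card or for K1 without a mobile-specific rewrite; cudaMallocManaged is the unified memory allocator introduced with CUDA 6.0.

```
// Minimal CUDA vector add: the same source is expected to compile for desktop
// Kepler or Tegra K1, since both expose the full Kepler ISA and CUDA runtime.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory, new in CUDA 6.0.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f (expect 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```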

NVIDIA made it a point to say that high-end games developed for the PC or even current generation consoles could be ported over to Tegra K1 without issue. It’s perhaps overreaching a bit to claim the latter given the delta in performance (which NVIDIA hopes to make up in 4 generations!), but you can definitely argue that titles built for the previous generation of consoles (Xbox 360/PS3) could easily be ported to Tegra K1.

At its CES press conference NVIDIA teased the idea that Tegra K1 is actually more powerful than the last generation of consoles. The slide below attempts to drive that point home:

With a GPU clock of 950MHz (admittedly, a bit on the high end), NVIDIA can deliver substantially more raw horsepower than either previous generation console (192 CUDA cores * 2 FLOPS per core * 950MHz). Peak texture filtering performance and, more importantly, memory bandwidth are lower than what was possible on those consoles, but the gaps aren’t large enough to prevent porting from happening. There may be some optimization needed, but it definitely looks like Tegra K1 is the first mobile platform that can more or less run Xbox 360/PS3 titles, at least from a performance standpoint.
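The raw-FLOPS figure NVIDIA leans on comes straight from the multiplication in the paragraph above; here it is worked through, along with the FP64 rate mentioned earlier (1/24 of FP32), in the same host-side C++ style as before.

```
// The peak-FLOPS arithmetic behind NVIDIA's console comparison.
#include <cstdio>

int main() {
    int    cuda_cores    = 192;   // one Kepler SMX
    double flops_per_clk = 2.0;   // one FMA per core per clock, counted as 2 FLOPS
    double clock_ghz     = 0.95;  // the 950MHz clock NVIDIA quoted

    double fp32_gflops = cuda_cores * flops_per_clk * clock_ghz; // ~364.8 GFLOPS
    double fp64_gflops = fp32_gflops / 24.0;                     // FP64 at 1/24 the FP32 rate

    printf("Tegra K1 peak FP32: %.1f GFLOPS, FP64: %.1f GFLOPS\n",
           fp32_gflops, fp64_gflops);
    return 0;
}
```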

In pursuit of making porting and game development as simple as possible, NVIDIA demonstrated its NSight Tegra plugins for Visual Studio. Without changing the IDE that developers are used to, NSight Tegra allows developers to use the NDK toolchain all within Visual Studio. I’m not enough of a developer to know whether or not NVIDIA’s efforts in this space truly make life easy enough to port Xbox 360/PS3 games over to Android, but its VS integration demos looked convincing at least.

NVIDIA had a port of Serious Sam 3 running just fine on Tegra K1 demo hardware. Any games that are prepped for Steam OS are very easy to port over to Android: once you make the move to OpenGL, the rest is allegedly fairly simple. The Serious Sam 3 port apparently took a couple of weeks, with the bulk of the effort going into mapping controls to an Android environment.

Comments

  • easp - Wednesday, January 8, 2014 - link

    So, it seems to me that 8 of these Denver cores would offer similar general purpose compute performance to a dual socket server from ~5-6 years ago, and yet, would make up a minuscule % of die area on a Tesla-class GPU die...
  • Krysto - Saturday, January 11, 2014 - link

    Some also say a Denver core should equal a Sandy Bridge core in performance, which would be quite impressive. That's what I have in my laptop, and it was a pretty high-end one 2 years ago.
  • OreoCookie - Sunday, January 12, 2014 - link

    Who wrote that, can you provide a link? I haven't seen any such claims. And I'm fairly sure nVidia would have mentioned that during the press event. Apple's A7 packs about the same punch as a Core 2 Duo, so it wouldn't be out of the question, but I'd be more cautious, especially seeing how high Intel's CPUs turbo these days.
  • PC Perv - Saturday, January 11, 2014 - link

    How can you make so many definitive statements over what was essentially a PR pitch? It's too bad there are no "critics" or ombudsmen to hold these bloggers accountable over time. (Granted, that is also why these bloggers will never garner respect from the mainstream media.) These bloggers seemingly get away with anything they say as long as they keep their industry friends happy.

    If anyone wants to know what I am talking about, go back 2 ~ 3 years and check these clowns' articles. And check if they ever, I mean EVER, acknowledge their misjudgments or stupidity.
  • PC Perv - Saturday, January 11, 2014 - link

    For instance, do you guys have any follow up on Tegra 4i?

    http://www.anandtech.com/show/6787/nvidia-tegra-4-...

    Or is it just the way it is with you guys? Just blow fanfare whenever an OEM does a press conference, and completely forget about it in less than a year?

    Have you no shame?
  • TheJian - Tuesday, January 14, 2014 - link

    What fanfare? T4i is a Q1 product and the modem just got certified on ATT last month or so. The whole point of the T4i is the modem and phones, so what is the problem? NV already showed it doing 150mbps (an update from 100mbps preview info) and this hasn't even been rolled out yet (anybody else running this besides Hong Kong/Singapore?). What do you want them to report? This product has been PULLED IN along with K1 at the cost of some T4 delay and sales. This is not news and everyone (even this NV-hating site) has reported it :) If the T4i is late at all, it's only because it was waiting on the modem, which after checking was certified in early November.

    Not sure this new modem speed is even interesting with caps we have today. At 50mbps on cable I can clock ~325GB or so pegged all day (that's north of 10TB/month). Even Hong Kong has a 10GB cap which is what, like 5x USA caps at 2GB usually? Even in HK that's only ONE 1080p flick and some browsing? I hope we start seeing Cell phone bill lawsuits soon that tie up these CAPPED companies so badly they are forced to stop this crap just due to litigation cost fears. But I think this is probably a pipe dream until google or someone else can offer unlimited mobile.

    IE, google mentions rolling out Gbit internet in Austin, and ATT goes on immediate defense announcing huge speed upgrades (20x faster to 300mbps) and a future upgrade past that on the books not long after. So it is terribly expensive and not doable before google, but the same week google announces their roll-out, ATT can suddenly roll-out a huge upgrade and BEAT google's roll-out dates...LOL. But to match google's prices ($70) you have to OK them spying on you...ROFL. At least Google forced the updates.
    http://www.theverge.com/2013/12/11/5200250/at-t-be...
    Then claims they can deny google access to poles a few days later:
    http://arstechnica.com/tech-policy/2013/12/why-att...
    We can only hope the city votes on 23rd (next week) to allow low pole access pricing. Hard to say google would lose offering free internet to 100 libraries and public joints in the city that the CITY chooses, but they already delayed so maybe they're stupid or bribed heavily. :)

    Maybe google just needs to announce everywhere and get ATT etc to announce matching $70 pricing then just say "just kidding". :) At worst they seem to easily force monopolies to respond as shown here. I hope they do the same in phones, amazon and apple too (heck MS also). We need all these big tech dogs to bark at cell providers big time and threaten their business models in any way they can. Competition from outsiders is sorely needed in cell or we'll be stuck with Verizon/ATT etc caps forever.
  • phoenix_rizzen - Thursday, January 16, 2014 - link

    Rogers in Canada has 150 Mbps LTE using their 2600 MHz spectrum. It's been live for about a year now.

    They ran a speedtest competition around the time they lit up the first 2600 MHz towers in Ontario, and there were a *lot* of entries showing over 90 Mbps. It's listed somewhere on their Redboard blog.

    My phone only does 100 Mbps LTE, and our town doesn't yet officially have LTE (there are 2 towers with it enabled out of the dozen or so towers in town), but I can get a consistent 40 Mbps on speedtests, with the occasional jump over 70.

    So, if backward old Canada can get 150 Mbps LTE working, anywhere should be able to. :)

    Oh, and 6 GB data plans are very common up here.
  • tipoo - Thursday, November 6, 2014 - link

    I wonder if the code morphing has anything to do with the Nexus 9's performance inconsistency? It does amazingly well in most individual benchmarks, but it chokes when thrown multitasking or unpredictable code.
