Ever since its arrival in the ultra mobile space, NVIDIA hasn't really flexed its GPU muscle. The Tegra GPUs we've seen thus far have been OK at best, and in serious need of improvement at worst. NVIDIA often blamed an immature OEM ecosystem unwilling to pay for the sort of large-die SoCs necessary to bring a high-performance GPU to market. Thankfully, that's all changing. Earlier this year NVIDIA laid out its mobile SoC roadmap through 2015, including the 2014 release of Project Logan - the first NVIDIA ultra mobile SoC to feature a Kepler GPU. Yesterday, at a private event at SIGGRAPH, NVIDIA demonstrated functional Logan silicon for the very first time.

NVIDIA got Logan silicon back from the fabs around 3 weeks ago, making it almost certain that we're dealing with some form of 28nm silicon here and not early 20nm samples.

NVIDIA isn't talking about CPU cores, but it's safe to assume that Logan will be another 4+1 arrangement of cores - likely still based on ARM's Cortex A15 IP (but perhaps a newer revision of the core). On the GPU front, NVIDIA confirmed our earlier speculation that Logan includes a single Kepler SMX:

One Kepler SMX features 192 CUDA cores. NVIDIA isn't talking about shipping GPU frequencies either, but it did provide this chart to put Logan's GPU capabilities into perspective:

Don't get too excited; we're looking at a comparison of GFLOPS, not game performance. Still, the peak theoretical ALU-bound performance of mobile Kepler should exceed that of a PlayStation 3 or GeForce 8800 GTX (memory bandwidth is another story, however). If we look closely at NVIDIA's chart and compare mobile Kepler to the iPad 4, we get a better idea of what sort of clock speeds NVIDIA would need to attain this level of performance. Doing some quick Photoshop estimation, it looks like NVIDIA is claiming mobile Kepler has somewhere around 5.2x the FP power of the PowerVR SGX 554MP4 in the iPad 4 (76.8 GFLOPS). That works out to right around 400 GFLOPS. With a 192-core implementation of Kepler, you get 2 FLOPS per core per cycle, or 384 FLOPS per cycle. To hit 400 GFLOPS you'd need to clock the mobile Kepler GPU at roughly 1GHz. That's certainly doable from an architectural standpoint (although we've never seen it done on any low power 28nm process), but it's probably a bit too high for something like a smartphone.
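If you want to sanity check that, here's the back-of-the-envelope math as a quick Python sketch. The 5.2x ratio is our own eyeballed estimate from NVIDIA's chart, not a figure NVIDIA disclosed, and the 2 FLOPS per core per cycle assumes one fused multiply-add per CUDA core:

```python
# Back-of-the-envelope math behind the ~1GHz clock estimate.
# Assumptions (ours, not NVIDIA's): the 5.2x-over-iPad-4 ratio is
# eyeballed from NVIDIA's chart, and Kepler does 2 FLOPS per CUDA
# core per cycle (one fused multiply-add).

CUDA_CORES = 192        # one Kepler SMX
FLOPS_PER_CORE = 2      # FMA = 2 floating point ops per cycle
IPAD4_GFLOPS = 76.8     # PowerVR SGX 554MP4 peak FP32 throughput
CHART_RATIO = 5.2       # our estimate from NVIDIA's chart

target_gflops = IPAD4_GFLOPS * CHART_RATIO      # ~399 GFLOPS
flops_per_cycle = CUDA_CORES * FLOPS_PER_CORE   # 384 FLOPS per cycle
implied_clock_ghz = target_gflops / flops_per_cycle

print(f"Implied GPU clock: {implied_clock_ghz:.2f} GHz")      # ~1.04 GHz
print(f"At half that clock: {target_gflops / 2:.0f} GFLOPS")  # ~200 GFLOPS
# ~200 GFLOPS at half clock is still roughly PlayStation 3 territory,
# which is what the next paragraph refers to.
```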

NVIDIA didn't want to talk frequencies, but they did tell me that we might see something this fast in some sort of a tablet. I suspect most implementations will be clocked significantly lower. Even at half the frequency, though, we're still talking about roughly PlayStation 3 levels of FP power out of a mobile SoC. We know nothing of Logan's memory subsystem, which obviously plays a major role in real-world gaming performance, but there's no getting around the fact that Logan's Kepler implementation means serious business. For years we've lamented NVIDIA's mobile GPUs; Logan looks like it's finally going to change that.

API Support and Live Demos

Unlike previous Tegra GPUs, Kepler is a fully unified architecture, and it's OpenGL ES 3.0, OpenGL 4.4 and DirectX 11 compliant. The API compliance alone is a huge step forward for NVIDIA. It's also a big one for game developers looking to move more seriously into mobile. Epic's Tim Sweeney even did a blog post for NVIDIA talking about Logan's implementation of Kepler and how it brings feature parity between PCs, next-gen consoles and mobile platforms. NVIDIA responded in kind by running some Unreal Engine 4 demos on Android on a Logan test platform. That's really the big story behind all of this. With Logan, NVIDIA will bring its mobile GPUs up to feature parity with what it's shipping in the PC market. Game developers looking to port games between console, PC, tablet and smartphone should have an easier time doing so if all platforms support the same APIs. Logan will take NVIDIA from being very behind in API support (with no OpenGL ES 3.0 support) to the head of the class.

NVIDIA took its Ira demo, originally run on a Titan at GTC 2013, and got it up and running on a Logan development board. Ira did need some work to make the transition to mobile: the skin shaders were simplified, smaller textures were used, and the rendering resolution was dropped to 1080p. NVIDIA claims this demo was done in a 2 - 3W power envelope.

The next demo is called Island and was originally shown on a Fermi desktop part. Running on Logan/mobile Kepler, this demo shows OpenGL 4.3 and hardware tessellation working.

The development board does feature a large heatspreader, but that's not too unusual for early silicon just out of bring-up. Logan's package size should be comparable to Tegra 4, although the die size will clearly be larger. The dev board is running Android and is connected to a 10.1-inch 1920 x 1200 touchscreen.

141 Comments

  • Refuge - Thursday, July 25, 2013 - link

    Give it a tick and a tock and you will be surprised.

    Haswell is Intel's first real attempt at creating a mobile product that meets the expectations that come with the Intel moniker.

    It is doing much better than Ivy did, and the graphics options are better, but the whole thing is still relatively young and juvenile. The next round I think we will see some very impressive results. Like I keep telling people, the Atom of tomorrow isn't going to be the Atom of yesterday's netbooks.
  • rwei - Wednesday, July 24, 2013 - link

    Serious question, why do mobile GPUs matter? I'm something of a declining gamer who probably last played a serious game around when ME3 came out, and I guess SC2:HotS briefly - and nothing on mobile platforms has excited me. On the other hand, I've accumulated a fat stack of games to play on consoles - the above, and Heavy Rain, Uncharted 3, The Last of Us - but I wouldn't actually play those on, say, a tablet (Heavy Rain maybe?), and even less so a phone.

    Infinity Blade was impressive for its time, but I would hardly buy a device to play it, and even in my reduced-passion state I still care more about games than most people.
  • randomhkkid - Wednesday, July 24, 2013 - link

    I think it will become more of a need as phones become the one device that does everything, i.e. when docked it becomes your desktop and when undocked it's a smartphone. Check out the Ubuntu Edge to see what I mean.
  • rwei - Wednesday, July 24, 2013 - link

    As things stand, I wouldn't even do that with an Ultrabook-class laptop, never mind a typical (non-Win8 convertible) tablet - and phones are still on a whole other plane entirely...!

    Particularly if high-DPI catches on (and I hope it does), my understanding is chips of this size won't have anywhere near the bandwidth to support that use case.
  • blacks329 - Wednesday, July 24, 2013 - link

    I had never thought of that, but Heavy Rain on a tablet would actually be kind of awesome! Too bad that studio is Sony owned (i.e. only PS games) and the director is a pretentious douche. Nonetheless, they make interesting 'games' and I look forward to playing Beyond: Two Souls.
  • Refuge - Thursday, July 25, 2013 - link

    There is always that slim chance it will pop up in the PlayStation store on some "Approved" HTC devices. I know my HTC One X+ got access to it because of the Tegra 3 in it, but the selection is a joke if you ask me.
  • name99 - Wednesday, July 24, 2013 - link

    You're right --- the population that cares about games is tiny, meaningless to Apple, Samsung, Nokia et al.

    The GPU is relevant on iOS, however, because the entire UI is built around "layers", which are essentially the backing store for every view (think controls like buttons, status bars, text windows, etc). These layers are composited together by the GPU to generate the final image you see. For this to give fluid scrolling, that compositing engine needs to be fast (and remember it is driving Retina displays, so lots of pixels). Even today (and a lot more so in iOS 7), each of these views can be subject to frame-by-frame transformations (scaling, translation, becoming darker, lighter or more or less transparent) to provide the animations that one takes for granted in iOS, and once again we want those to run glitch free.

    All this stuff basically pushes the backend (texture) part of the GPU, not geometry and lighting. However, something which DOES push geometry (I don't know about lighting) is Apple's flyover view in Maps. [Yeah, yeah, if you feel the need to make some adolescent comment about Apple Maps, please, for the love of god, go act like a child somewhere else.] The flyovers in Maps as of today (for the cities that have them) are, truth be told, PRETTY FREAKING INCREDIBLE. They combine the best features of Google Earth and Street View, in a UI which is easier to use than either of those predecessors, and which runs a lot smoother than those predecessors. But I am guessing that the Maps 3D views push the GPU HW to its limits. They are smooth, yes, but they seem to do a careful job of limiting quality to keep smoothness going. There is no anti-aliasing in play, for example, and the tessellation of irregular objects (most obviously trees) is clearly too coarse. All of which means that if a GPU 2x as fast were available, Apple could make the 3D view in Maps look just that much better.

    Finally, I suspect (without proof) that Apple also does a large amount of its rendering (i.e. the stroking and filling of paths, the construction of glyphs, and so on) on the GPU. They've wanted to do it that way for years on OS X, but were always hindered by backward compatibility concerns. With the chance to start over on iOS, I'd expect they made sure this was a feasible path.
  • tviceman - Wednesday, July 24, 2013 - link

    It's nice to see Nvidia make comparisons to their own products. In this case, outperforming an 8800 GTX puts things into good perspective when looking at Anand's mobile GPU benchmarks.

    If Nvidia can deliver Logan "on time" then it truly will be a very, very great SoC. The biggest issue they'll still have to deal with is the A15's power-hungry design. Parker's (Tegra 6) custom cores will hopefully be more power conscious, like the Krait cores are.
  • xype - Wednesday, July 24, 2013 - link

    Oh, wow, I am sure this time around their outlandish performance claims will actually come true and Apple, Samsung, Qualcomm, et al will be totally outclassed.

    Especially since we all know companies like Apple—whose A6X the "sometime next year" chip is compared against—just decided to stop developing their mobile CPUs and will ship the next 4 iterations of each product with an A6X variant.
  • cdripper2 - Wednesday, July 24, 2013 - link

    Forgotten about Lucid Virtu have we? It would seem we SHOULD ignore your further posts ;)
