Ever since NVIDIA bowed out of the highly competitive (and high pressure) market for mobile ARM SoCs, there has been quite a bit of speculation over what would happen with NVIDIA’s SoC business. With the company enjoying a good degree of success with projects like the Drive system and Jetson, signs have pointed towards NVIDIA continuing their SoC efforts. But what direction they would take remained a mystery, as the public roadmap ended with the current-generation Parker SoC. However, we finally have an answer to that, and the answer is Xavier.

At NVIDIA’s GTC Europe 2016 conference this morning, the company teased just a bit of information on the next-generation Tegra SoC, which the company is calling Xavier (ed: in keeping with comic book codenames, this is Professor Xavier of the X-Men). Details on the chip are light – the chip won’t even sample until over a year from now – but NVIDIA has laid out just enough information to make it clear that the Tegra group has left mobile behind for good; the company is now focused on high-performance SoCs for cars and other devices further up the power/performance spectrum.

NVIDIA ARM SoCs

|                       | Xavier                    | Parker                                | Erista (Tegra X1)                         |
|-----------------------|---------------------------|---------------------------------------|-------------------------------------------|
| CPU                   | 8x NVIDIA Custom ARM      | 2x NVIDIA Denver + 4x ARM Cortex-A57  | 4x ARM Cortex-A57 + 4x ARM Cortex-A53     |
| GPU                   | Volta, 512 CUDA Cores     | Pascal, 256 CUDA Cores                | Maxwell, 256 CUDA Cores                   |
| Memory                | ?                         | LPDDR4, 128-bit Bus                   | LPDDR3, 64-bit Bus                        |
| Video Processing      | 7680x4320 Encode & Decode | 3840x2160p60 Encode & Decode          | 3840x2160p30 Encode, 3840x2160p60 Decode  |
| Transistors           | 7B                        | ?                                     | ?                                         |
| Manufacturing Process | TSMC 16nm FinFET+         | TSMC 16nm FinFET+                     | TSMC 20nm Planar                          |

So what’s Xavier? In a nutshell, it’s the next generation of Tegra, done bigger and badder. NVIDIA is essentially aiming to capture much of the complete Drive PX 2 system’s computational power (2x SoC + 2x dGPU) on a single SoC. This SoC will have 7 billion transistors – about as many as a GP104 GPU – and will be built on TSMC’s 16nm FinFET+ process. (To put this in perspective, at GP104-like transistor density, we'd be looking at an SoC nearly 300mm² in size.)
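The die-size comparison above is simple back-of-the-envelope arithmetic, which can be sketched as follows (the GP104 figures are approximate published numbers, not exact specs):

```python
# Rough die size estimate for Xavier, assuming GP104-like transistor density.
# GP104 packs roughly 7.2B transistors into roughly 314 mm^2.
GP104_TRANSISTORS = 7.2e9
GP104_AREA_MM2 = 314.0

XAVIER_TRANSISTORS = 7.0e9

density = GP104_TRANSISTORS / GP104_AREA_MM2      # transistors per mm^2
xavier_area = XAVIER_TRANSISTORS / density         # implied die area

print(f"Estimated Xavier die size: {xavier_area:.0f} mm^2")  # ~305 mm^2
```

Actual area would depend heavily on how much of the die is logic versus SRAM and I/O, so treat this purely as an order-of-magnitude sanity check.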

Under the hood, NVIDIA has revealed just a bit of what to expect. The CPU will be composed of 8 custom ARM cores. The name “Denver” wasn’t used in this presentation, so at this point it’s anyone’s guess whether this is Denver 3 or another new design altogether. Meanwhile on the GPU side, we’ll be looking at a Volta-generation design with 512 CUDA Cores. Unfortunately we don’t know anything substantial about Volta at this time; the architecture was pushed back on NVIDIA’s previous roadmaps in favor of Pascal, and as Pascal only launched in the last few months, NVIDIA hasn’t said anything further about it.

Meanwhile, NVIDIA’s performance expectations for Xavier are significant. As mentioned before, the company wants to condense much of Drive PX 2 into a single chip. With Xavier, NVIDIA wants to reach 20 Deep Learning Tera-Ops (DL TOPS), a metric for measuring 8-bit integer operations. 20 DL TOPS happens to be what Drive PX 2 can hit, and about 43% of what NVIDIA’s flagship Tesla P40 can offer in a 250W card. Perhaps more surprising still, NVIDIA wants to do all of this at 20W, or 1 DL TOPS-per-watt – one-quarter the power consumption of Drive PX 2 – a lofty goal given that Xavier is based on the same 16nm process as Pascal and all of the Drive PX 2’s various processors.
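The efficiency comparison works out as follows (these are the vendor-quoted figures from above, not independent measurements; the Drive PX 2 power number is inferred from the one-quarter claim, and the P40 figure is its quoted INT8 peak):

```python
# TOPS-per-watt comparison using the figures cited in the article.
# Format: name -> (DL TOPS, board/chip power in watts)
chips = {
    "Xavier":     (20.0, 20.0),    # NVIDIA's stated target
    "Drive PX 2": (20.0, 80.0),    # power inferred from "one-quarter" claim
    "Tesla P40":  (47.0, 250.0),   # quoted INT8 peak in a 250W card
}

for name, (tops, watts) in chips.items():
    print(f"{name:>10}: {tops / watts:.2f} TOPS/W")
```

On these numbers Xavier would land at 1.0 TOPS/W, versus roughly 0.25 for the PX 2 system and under 0.2 for the P40 – which is what makes the 20W target such an aggressive goal on the same process node.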

NVIDIA’s envisioned application for Xavier, as you might expect, is focused on further ramping up their automotive business. They are pitching Xavier as an “AI Supercomputer,” a nod to its planned high INT8 performance, which in turn is a key component of fast neural network inferencing. What NVIDIA is essentially proposing, then, is a beast of an inference processor – one that, unlike their Tesla discrete GPUs, can function on a stand-alone basis. Coupled with this will be some new computer vision hardware to feed Xavier, including a pair of 8K video processors and what NVIDIA is calling a “new computer vision accelerator.”
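For readers unfamiliar with why INT8 throughput matters for inference, the idea is that trained network weights and activations can be mapped to 8-bit integers with a floating-point scale factor, so the expensive multiply-accumulate work (what DL TOPS counts) runs in cheap integer math. A minimal sketch, with made-up example values:

```python
# Toy illustration of INT8 quantized inference: floats are mapped to the
# int8 range via a scale factor, the dot product runs in integer math,
# and multiplying by the scales recovers an approximate float result.
def quantize(values, scale):
    """Map floats to the int8 range [-127, 127] given a scale factor."""
    return [max(-127, min(127, round(v / scale))) for v in values]

weights     = [0.5, -1.2,  0.8, 0.1]   # hypothetical trained weights
activations = [1.0,  0.5, -0.5, 2.0]   # hypothetical layer inputs
w_scale, a_scale = 0.01, 0.02          # chosen so values fit in int8

qw = quantize(weights, w_scale)
qa = quantize(activations, a_scale)

# Integer multiply-accumulate, then dequantize with the two scales.
acc = sum(w * a for w, a in zip(qw, qa))
result = acc * w_scale * a_scale

# Reference result in full floating point.
float_result = sum(w * a for w, a in zip(weights, activations))
print(result, float_result)  # approximately equal
```

Real inference stacks calibrate these scale factors per layer to bound the quantization error; the point here is simply that the hot loop becomes pure 8-bit integer arithmetic, which is what Xavier’s DL TOPS figure is measuring.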

Wrapping things up, as we mentioned before, Xavier is a far-future product for NVIDIA. While the company is teasing it today, the SoC won’t begin sampling until Q4 of 2017, which in turn implies that volume shipments won’t come until 2018. That said, with their new focus on the automotive market, NVIDIA has shifted from an industry of agile competitors and cut-throat competition to one where their customers would like as much of a heads-up as possible. So these kinds of early announcements are likely to become par for the course for NVIDIA.

Comments

  • TheinsanegamerN - Thursday, October 06, 2016 - link

    Unfortunately that leaves us with no good, powerful tablets. Samsung seems content to put weak GPUs in its newest Tab models, nobody else will use anything other than bottom-of-the-barrel midrange chips, and nobody dares to make a product over $150. It seems nobody wants to make a 10-inch, $400 tablet with a Snapdragon 820 and microSD support.

    I'm glad I grabbed a Shield tablet when I did. It looks like there won't be a good non-$500 Pixel tablet for quite some time (and the Pixel tablet doesn't have microSD, so it's a bit of a non-starter).
  • name99 - Wednesday, September 28, 2016 - link

    There are two ways designers seem able to track the path they need to follow: you can start at high performance and then try to maintain that while reducing energy every iteration. That's basically been the Intel path. Or you can start at low energy and try to maintain that while improving performance (the Apple/ARM path).

    Maybe we don't have enough data to draw strong conclusions, but it is notable that Apple and ARM have done OK following their track, while Intel and nV have not. Both have managed to stand still, to protect their existing markets, but they have not managed to grow the market substantially in the way that Apple and ARM have.

    Trying to grow downward seems fundamentally more problematic than trying to grow upward. I don't know if that's because of business psychology (management is scared that cheaper chips will steal high end sales, so they cripple the chips so much as to be useless), or technological (it's just a more complicated problem to strip the energy out of a complicated fast design, than to add performance to a low-energy design [being very careful to make sure that everything you add does not add extra energy costs]).
  • doggface - Thursday, September 29, 2016 - link

    I think if anything you could see Apple as very Intel-like in their development. Whereas the majority of ARM licensees go for many small cores to save costs, Apple has followed Intel in building big, wide cores with large memory caches to prioritize single-threaded IPC.

    Further, if you look at Intel's current offerings, they are shipping "cores formerly known as Core M" at 4-5W that are not architecturally dissimilar to their big 100-150W cores.

    The only real difference is that Apple's cores were designed from the get-go to be mobile-first, while Intel originally did not have those design constraints, focused on MOAR PWR, and has taken 3-4 generations to resolve the power draw issues without drastically sacrificing performance (not really moving forward far either, but that is a different issue).
  • doggface - Thursday, September 29, 2016 - link

    Or to put it another way: the only real difference between Apple and Intel, in my book, is WHEN they entered the market – different expectations were in place.
  • TheinsanegamerN - Thursday, October 06, 2016 - link

    Tell me about it. The Shield tablet is the last reasonable Android machine. The rest of the $150-200 crowd are poorly made, have vastly inferior SoCs, etc. Samsung is putting too-small GPUs in its new Tabs, and Google is only pushing the ridiculously priced Pixel C with no microSD.

    They are just handing the market to apple.
  • Huber - Sunday, October 09, 2016 - link

    So I'm guessing this means no more NVIDIA SHIELD tablets and Android TV devices? If so, that's a shame; those were the best on the market.
