NVIDIA Announces Jetson TX2: Parker Comes To NVIDIA’s Embedded System Kit
by Ryan Smith on March 7, 2017 9:00 PM EST - Posted in
- SoCs
- NVIDIA
- Tegra
- Machine Learning
- Tegra Parker
- Jetson
For a few years now, NVIDIA has been offering their line of Jetson embedded system kits. Originally launched using Tegra K1 in 2014, the first Jetson was designed to be a dev kit for groups looking to build their own Tegra-based devices from scratch. Instead, what NVIDIA found, to their surprise, was that groups would use the Jetson board as-is and build their devices around it. This unexpected market led NVIDIA to pivot a bit on what Jetson would be, resulting in the second-generation Jetson TX1, a proper embedded system board that can be used for both development purposes and production devices.
This relaunched Jetson came at an interesting time for NVIDIA, arriving right as the company's fortunes in neural networking/deep learning took off in earnest. Though the Jetson TX1 and underlying Tegra X1 SoC lack the power needed for high-performance use cases – these are after all based on an SoC designed for mobile applications – they have enough power for lower-performance inferencing. As a result, the Jetson TX1 has become an important part of NVIDIA’s neural networking triad, offering their GPU architecture and its various benefits for devices doing inferencing at the “edge” of a system.
Now, about a year and a half after the launch of the Jetson TX1, NVIDIA is giving the Jetson platform a significant update in the form of the Jetson TX2. This updated Jetson is not as radical a change as the TX1 was before it – NVIDIA seems to have found a good place in terms of form factor and the platform’s core feature set – but NVIDIA is looking to take what worked with the TX1 and further ramp up the performance of the platform.
The big change here is the upgrade to NVIDIA’s newest-generation Parker SoC. While Parker never made it into third-party mobile designs, NVIDIA has been leveraging it internally for the Drive system and other projects, and now it will finally become the heart of the Jetson platform as well. Relative to the Tegra X1 in the previous Jetson, Parker is a bigger and better version of the SoC. The GPU architecture is upgraded to NVIDIA’s latest-generation Pascal architecture, and on the CPU side NVIDIA adds a pair of Denver 2 CPU cores to the existing quad-core Cortex-A57 cluster. Equally important, Parker finally goes back to a 128-bit memory bus, greatly boosting the memory bandwidth available to the SoC. The resulting SoC is fabbed on TSMC’s 16nm FinFET process, giving NVIDIA a much-welcomed improvement in power efficiency.
Paired with Parker on the Jetson TX2 as supporting hardware is 8GB of LPDDR4-3733 DRAM, a 32GB eMMC flash module, a 2x2 802.11ac + Bluetooth wireless radio, and a Gigabit Ethernet controller. The resulting board is still 50mm x 87mm in size, with NVIDIA intending it to be drop-in compatible with Jetson TX1.
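For context on the memory upgrade, the 128-bit interface paired with LPDDR4-3733 works out to roughly 59.7GB/s of theoretical peak bandwidth. The quick back-of-the-envelope calculation below is a sketch derived from the published transfer rate and bus width, not an official NVIDIA figure.

```python
# Back-of-the-envelope peak memory bandwidth for Jetson TX2's LPDDR4-3733 on a 128-bit bus.
# Derived from the published transfer rate and bus width; not an official NVIDIA figure.
transfer_rate = 3733e6      # LPDDR4-3733: transfers per second per pin
bus_width_bits = 128        # Parker's memory interface width
bytes_per_transfer = bus_width_bits / 8

peak_bandwidth_gbs = transfer_rate * bytes_per_transfer / 1e9
print(f"Theoretical peak bandwidth: {peak_bandwidth_gbs:.1f} GB/s")   # ~59.7 GB/s
```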
Given these upgrades to the core hardware, unsurprisingly NVIDIA’s primary marketing angle with the Jetson TX2 is on its performance relative to the TX1. In a bit of a departure from the TX1, NVIDIA is canonizing two performance modes on the TX2: Max-Q and Max-P. Max-Q is the company’s name for TX2’s energy efficiency mode; at 7.5W, this mode clocks the Parker SoC for efficiency over performance – essentially placing it right before the bend in the power/performance curve – with NVIDIA claiming that this mode offers 2x the energy efficiency of the Jetson TX1. In this mode, TX2 should have similar performance to TX1 in the latter's max performance mode.
Meanwhile the board’s Max-P mode is its maximum performance mode. In this mode NVIDIA sets the board TDP to 15W, allowing the TX2 to hit higher performance at the cost of some energy efficiency. NVIDIA claims that Max-P offers up to 2x the performance of the Jetson TX1, though as GPU clockspeeds aren't double the TX1's, the actual gain will vary on an application-by-application basis.
NVIDIA Jetson TX2 Performance Modes
                     | Max-Q  | Max-P                               | Max Clocks
GPU Frequency        | 854MHz | 1122MHz                             | 1302MHz
Cortex-A57 Frequency | 1.2GHz | Stand-Alone: 2GHz; w/Denver: 1.4GHz | 2GHz+
Denver 2 Frequency   | N/A    | Stand-Alone: 2GHz; w/A57: 1.4GHz    | 2GHz
TDP                  | 7.5W   | 15W                                 | N/A
In terms of clockspeeds, NVIDIA has disclosed that in Max-Q mode, the GPU is clocked at 854MHz while the Cortex-A57 cluster is at 1.2GHz. Going to Max-P increases the GPU clockspeed further to 1122MHz, and allows for multiple CPU options: either the Cortex-A57 cluster or the Denver 2 cluster can be run at 2GHz, or both can be run at 1.4GHz. Though when it comes to all-out performance, even Max-P mode is below the TX2's limits; the GPU clock can top out at just over 1300MHz and CPU clocks can reach 2GHz or better. Power states are configurable, so customers can dial in the TDPs and desired clockspeeds they want; however, NVIDIA notes that using the maximum clocks goes further outside of the Parker SoC’s efficiency range.
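For those wanting to experiment with these presets, NVIDIA's L4T software stack for Jetson exposes power modes through the nvpmodel utility. Below is a minimal Python sketch of switching between them; the specific mode IDs (0 for the unconstrained maximum-clock setting, 1 for Max-Q, 2 for Max-P) are assumptions that can vary between L4T releases, so the mapping should be confirmed with "nvpmodel -q" on the target board first.

```python
# Hedged sketch: select a Jetson TX2 power preset via NVIDIA's nvpmodel utility (part of L4T).
# The mode IDs below are assumptions based on TX2 documentation and may differ between
# L4T releases; run "sudo nvpmodel -q" on the target board to confirm them.
import subprocess

TX2_POWER_MODES = {
    "maxn": 0,   # unconstrained clocks, beyond the 15W Max-P envelope
    "maxq": 1,   # 7.5W efficiency mode (Max-Q)
    "maxp": 2,   # 15W performance mode (Max-P)
}

def set_power_mode(name: str) -> None:
    """Switch to a named power preset (requires root privileges)."""
    subprocess.run(["sudo", "nvpmodel", "-m", str(TX2_POWER_MODES[name])], check=True)

def current_power_mode() -> str:
    """Return nvpmodel's report of the currently active preset."""
    result = subprocess.run(["sudo", "nvpmodel", "-q"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    set_power_mode("maxq")          # cap the module at the 7.5W efficiency point
    print(current_power_mode())
```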
Finally, along with announcing the Jetson TX2 module itself, NVIDIA is also announcing a Jetson TX2 development kit. The dev kit will actually ship first – it ships next week in the US and Europe, with other regions in April – and contains a TX2 module along with a carrier board that provides I/O breakout and interfaces to various features such as USB, HDMI, and Ethernet. Judging from the pictures NVIDIA has sent over, the TX2 carrier board is very similar (if not identical) to the TX1 carrier board, so like the TX2 itself it should be familiar to existing Jetson developers.
With the dev kit leading the charge for Jetson TX2, NVIDIA will be selling it for $599 retail/$299 education, the same price the Jetson TX1 dev kit launched at back in 2015. Meanwhile the stand-alone Jetson TX2 module will be arriving in Q2’17, priced at $399 in 1K unit quantities. In the case of the module, this means prices have gone up a bit since the last generation; the TX2 is hitting the market at $100 higher than where the TX1 launched.
Source: NVIDIA
59 Comments
ddriver - Wednesday, March 8, 2017 - link
Yeah, today goldman sachs tells people to buy nvidia, to anyone who is not evil or retarded that means "do not buy nvidia". Granted, nvidia has the tools, and the libraries, but that's just bait to lock in the lazies. Not everyone is lazy and talentless, not everyone needs to be held by the hand like a little baby.
I already have enough money to not really even care or think about money. That doesn't mean I outta be wasting it on overpriced, poor value stuff that is not worth it. Everyone is stupid, that's true. I am stupid too. Just less stupid than most. I am smart enough to know what I am stupid about. Unlike you ;)
jospoortvliet - Thursday, March 9, 2017 - link
Dude. Take your meds.
TheJian - Friday, March 10, 2017 - link
Let me know when AMD has 8yrs of R&D and a few billion stuck in OpenCL development. They can't even properly launch a cpu (see reviews, games don't work right, SMT screwed, boards not even ready etc) or gpu (see last gen). If AMD doesn't actually start MAKING money at some point they're screwed. They have lost $8B+ in the last 15-20yrs. That's not good right? They've laid off 30% of their engineers in the last ~5yrs. They've been restructuring for 5yrs. The "tools and the libraries" are what you pay the extra dough for. Cuda works because they stuck a decade of development and cash into it. It's taught in 450 universities across a few dozen countries.
The point of the tools etc is smaller guys can get in. The point of using something like unreal engine is a smaller guy can make a credible game. You don't seem to get the point of all this stuff. Not everybody has the time or money to develop an end to end solution (even larger companies buy qualcomm etc to get the modem and all in one for mobile etc) so part of the value of a device like this (or drive px etc) is all that you get on top of the device.
10yrs ago I would not have thought about game dev. It would have taken 10yrs to make a crappy game. Today on multiple engines (take your pick) I can make something worth paying for in a few years alone if desired. If you think that guy doing this is lazy or talentless you're dumber than you think ;) Sorry you're stupid. I'm ignorant about some stuff (cars, couldn't care less about learning them), but because I choose to be. But I'm not stupid. Comic you mention the stock, I'm guessing it will be $125-150 in the next year (under $100 today - $20 off in the last month). Auto's will at some point make them money on socs (and I think they'll re-enter mobile at 10nm or below as modem's can be added without watt costs etc), and AI/Big data will get them the rest of the way. Record revenue, margins, income will keep happening. Next rev of cards will probably be able to be priced another $50 across the board because Vega won't likely be able to do much against either Nvidia's new lineup of revved up boards with faster mem (GDDR5x on almost everything shortly and faster clocks across the lineup on top), or if that isn't enough we'll probably see 12nm Voltas for xmas if needed or at least Q1. Worst case NV just lowers prices until they put out the winner again just like Intel would do. Unlike AMD, both of their enemies can fight a price war for a year and win it next year. AMD will do better vs Intel (should get some expensive server chip sales) than nvidia. Intel has been paying so much attention to racing down to ARM they forgot about AMD even being alive. Nvidia hasn't done that and likely has multiple ways to play this out without a change in market share or much margin loss. Unlike Intel they never sat on their laurels. They've forged ahead and even taken the smarter/cheaper (GDDR5x) and much easier to produce route. HBM2 like HBM will be a problem for AMD going alone. If NV was in it maybe it wouldn't be expensive (NV could put what AMD is pulling), but alone they'll be killing profits just like last time and already late again just like last time giving NV more room to make adjustments.
It's comic AMD's slides compare HBM2 to GDDR5. That isn't what the competition will be using. They're going to be top to bottom GDDR5x shortly except for the bottom card. NV has the next 3 months to sell 1080ti and capitalize on top pricing then be able to cut if needed and not lose much having already milked the cow. Unfortunately for AMD, HBM2 held them up yet again (just like the first rev, not to mention will probably limit supply again just like HBM1). Benchmarks have shown Vega beating 1080 by 10%. Unfortunately it's now facing 1080ti (running like Titan) due to HBM2 just hitting production and delaying Vega. Lastly Raja said the driver team is almost entirely dedicated to Vulkan now:
"I only have a finite number of engineers so they’re just focused on Vulkan."
That means DX11 people, OpenGL will be left wanting. So even if Vega ends up 20-30% faster than 1080 in what they like (vulkan/dx12?), 1080ti will likely at worst tie it in most stuff and if needed they can always put out a card with 30 sm units on instead of 28 right (just a p6000 at that point right? Surely there are a few extras lying around)? Surely they have cherry picked enough titan chips by now that fully work "just in case" they're needed. I see them constantly talking 4k which again is ignoring the fact that 95% of us are running 1920x1200 or lower. Who cares if you win where nobody plays? They seem to be forgetting a full 50% of the market is running win7 and dx11 too. I won't likely be going win10 unless forced (2020? ROFL). There aren't enough games coming this year to make me want win10/dx12 and vulkan will run in win7. But I don't see a ton of Vulkan patches coming so far for older current games. Things could change but we'll see. I'd rather bet on what I can WIN today, not what you hope might happen one day. How long have people waited for bulldozer to be a winner? How long will it take for ZEN to get fixed on gaming? Will it ever? Since AMD themselves said already looking for gains on Zen2. PCper thinks games will look the same on Ryzen for good (so no faith in fixes in their opinion based on AMD talk).
Looks like we'll get two cards with about the same bandwidth, etc, but with NV having the dough to make drivers for all apis not just vulkan. Not doubting AMD will have great hardware, its the drivers that will likely keep them down. Raja himself said they're completely focused on Vulkan (so ignoring DX12, Dx11, OpenGL for now? Perhaps DX12 good enough?). Not a happy camper when AMD comes right out and says basically both products are short on R&D money. Now that we've seen 1080ti (just reading reviews...LOL). Board partners will make it even faster. Hope AMD can make enough vega to at least pull down some cash with it (meaning HBM2 limited again probably).
LostInSF - Wednesday, March 8, 2017 - link
$15 to buy a brand new Nvidia Parker SOC? Are you serious? If you have the source, I'll buy thousands of Nvidia Parker from you. LOL
ddriver - Wednesday, March 8, 2017 - link
15$ to make it. Production cost, doh! What it outta cost is 50$. Put the money on the table, and I will sell you as much from stuff that costs me 15$ for 50$ as you want.
LostInSF - Thursday, March 9, 2017 - link
LOL! Production cost? iphone 7 production cost is <$200, but sold at $700. Go and accuse every company why you all don't sell at your production cost?
TheJian - Wednesday, March 8, 2017 - link
Even with your prices it's $100 so 10x would be $1000. Also, tell your story to Intel who was losing 4.1B a year on mobile. Nvidia same story just less and selling far less. BTW, the soc likely costs more than $30 to make now maybe quite a bit more since they make nowhere near say apple: http://news.ihsmarkit.com/sites/ihs.newshq.busines...
Apple's A10 is $27. I'm fairly certain NV's chips are above this since they are not likely making 50-100mil of them. In a document of a teardown of Xiaomi Mi 3 the soc cost was $27 and that was a LONG time ago. They are growing in size. IE, the upcoming Volta soc is expected to be 300mm. That isn't cheap to make. Consider AMD's gpu for ps4/xbox1 is not much bigger and costs 90-105 to make and they sell them for $100-110 (ps4/xbox1 respectively) upon release when they said they had single digit margins (now supposedly at mid-teens, which I take to mean not more than 15% and the Q reports and sales of both units back this up). The Volta chip is expected to be 7B transistors. Barely incremental updates? Even mighty Intel was losing 4.1B a year...LOL. A 14nm 165w Broadwell (24 core IIRC) has 7.2B transistors so you should be able to see the complexity here. For perspective the GTX 1080's die size is 314mm^2 and also has 7.2B transistors. So the new Volta Soc is about as big as GTX 1080's die. You don't just have to R&D it either, you have to pay to tape it out etc.
https://semico.com/content/soc-silicon-and-softwar...
A $20 soc design is required to ship 10mil units just to break even on older tech. Getting more expensive now with coming 10nm. Not to mention upwards trend of software costs to go with it (69% cagr per shrink from 28 down to 7nm!). I don't think they're talking chips the size of Nvidia's either (nor a samsung/apple). More likely some chinese crap.
"Total SoC design costs increased 89% from the 28nm node to the 14nm node and are expected to increase 32% again at the 10nm node and 45% at the 7nm node."
They mention the number of low cost crap keeping Avg costs down. But that isn't Apple/Nvidia/Samsung's top socs. To date, Nvidia hasn't made a dime on their socs. That probably won't happen until the soc segment reaches 1B revenue. Last I checked they are FAR from that (about 500mil/yr). Since they sell software with the hardware (total solution for cars) it might be less than a billion needed to break even now but they still have yet to make money in this segment, so you're really not making sense here.
http://electroiq.com/petes-posts/2015/01/26/expone...
Another showing costs blowing up.
"McGregor said the tapeout costs to do a single device are very high. “You have to put $100 million into a semiconductor startup today to be able to get to productization.” This means that big companies will be getting bigger. “There will still be some small companies – but I think the mid-sized company in our industry, in devices, is going to dramatically go away because of the scale and other things required,” he said.
This crap isn't cheap. You should get the point. It might not be several billion, but they do spend about 1.5B a year in R&D now. They make a new IP and spread it over pro/datacenter/auto/gamer etc. If it wasn't for the others making money, the socs would have died by now. Auto etc is looking promising at some point but this is like spending 8yrs on Cuda to get it entrenched (and billions). I wish they'd make a 500w pc like box that would accept their discrete gpus (I'd hoped this would be rev2 of shieldTV but that hasn't come yet, maybe rev3?). I think they'd do much better offering a box that is far closer or even better (with discrete gpu) than xbox1/ps4 etc. It can't be that hard to put it in a much larger box and strap a heatsink/fan on it and run it at 50-100w.
You don't just do IP and then churn out cheap design after cheap design. EACH design costs a bunch of money to make and support (software) before you even get it into a product. IE, in the article above the guy mentions a company needing 100mil to get to productization. Hence the smaller companies dying soon or never even starting up. He also mentions they get pretty much nothing from the software and now have far more software engineers than hardware which ends up just being the gift wrap around the chips as he says. It's not as cheap as you're making things out to be. Intel couldn't make a DIME on mobile for years (4B+ losses yearly until they gave it up).
16GB of DDR4 for that i7 is $85-100 alone, never mind the chip price etc...LOL. You seem confused about the price that Intel device would be selling for to make back all the design costs of everything in it and make profit. Jetson TX2 here isn't a raspberry pi. :) You aren't the target market, so get over it. Go buy a raspberry pi or get a better job so you can afford better toys...ROFL. This is being sold to car dealers/universities etc.
renz496 - Wednesday, March 8, 2017 - link
Dev boards have always been more expensive. Why did you try to compare it with regular PC parts that we can buy separately? Qualcomm's dev board for the snapdragon 820 (Open-Q 820) pretty much costs the same: https://shop.intrinsyc.com/products/open-q-820-dev...
A5 - Wednesday, March 8, 2017 - link
Yeah, dev boards are always expensive. I've used reference FPGA boards that cost several times more than this.
BrokenCrayons - Wednesday, March 8, 2017 - link
For an embedded development kit, the price is pretty reasonable. Besides that, these things are likely to sell to institutional and corporate buyers rather than individual tinkerers. Those organizations won't flinch at the price.