Automotive: DRIVE CX and DRIVE PX

While NVIDIA has been a GPU company throughout its entire history, they will be the first to tell you that they know they can't remain strictly a GPU company forever, and that they must diversify if they are to survive over the long run. The result of this need has been a focus by NVIDIA over the last half-decade or so on offering a wider range of hardware and even software. Tegra SoCs have been a big part of that plan so far, but in recent years NVIDIA has become increasingly discontent with being a pure hardware provider, leading the company to branch out in unusual ways: not just selling hardware, but selling buyers on whole solutions or experiences. GRID, GameWorks, and NVIDIA's Visual Computing Appliances have all been part of this branching out process.

Meanwhile, with unabashed car enthusiast Jen-Hsun Huang at the helm of NVIDIA, it's hardly a coincidence that the company has also been branching out into automotive technology. Though still an early field for NVIDIA, the company's automotive Tegra sales have otherwise been a bright spot amid the larger struggles Tegra has faced. And now, against the backdrop of CES 2015, the company is taking their next step into automotive technology by expanding beyond just selling Tegras to automobile manufacturers, and into selling manufacturers complete automotive solutions. To this end, NVIDIA is announcing two new automotive platforms: NVIDIA DRIVE CX and DRIVE PX.

DRIVE CX is NVIDIA’s in-car computing platform, which is designed to power in-car entertainment, navigation, and instrument clusters. While it may seem a bit odd to use a mobile SoC for such an application, Tesla Motors has shown that this is more than viable.

With NVIDIA's DRIVE CX, automotive OEMs get a Tegra X1 on a board that provides support for Bluetooth, modems, audio systems, cameras, and the other interfaces needed to integrate such an SoC into a car. This makes it possible to drive up to 16.6MP of aggregate display resolution, equivalent to roughly two 4K displays or eight 1080p displays, though each DRIVE CX module can only drive three physical displays. In press photos the platform also appears to have a fan, which is likely necessary to allow the Tegra X1 to run continuously at maximum performance without throttling.
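As a quick sanity check on those numbers (a minimal sketch, assuming "4K" means 3840×2160 and "1080p" means 1920×1080):

```python
# Sanity check on DRIVE CX's 16.6MP aggregate display figure.
# Assumes "4K" means 3840x2160 (UHD) and "1080p" means 1920x1080.
MP = 1_000_000

uhd = 3840 * 2160   # ~8.29MP per 4K display
fhd = 1920 * 1080   # ~2.07MP per 1080p display

print(f"Two 4K displays:    {2 * uhd / MP:.1f}MP")  # 16.6MP
print(f"Eight 1080p panels: {8 * fhd / MP:.1f}MP")  # 16.6MP
```

Both configurations work out to the same 16,588,800 pixels, which is where the 16.6MP figure comes from.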

NVIDIA showed off some examples of where DRIVE CX would improve over existing car computing systems, in the form of advanced 3D rendering for navigation to better convey information, and 3D instrument clusters which are said to be a better match for cars with premium designs. Although the latter is a bit gimmicky, DRIVE CX does seem to have a strong selling point in providing an in-car computing platform with a large amount of compute while driving down the time and cost of developing such a platform.

While DRIVE CX seems to be a logical application of a mobile SoC, DRIVE PX puts mobile SoCs to work in car autopilot applications. To do this, the DRIVE PX platform uses two Tegra X1 SoCs to support up to twelve cameras with an aggregate bandwidth of 1300 megapixels per second. This means it's possible to have all twelve cameras capturing 1080p video at around 60 FPS or 720p video at 120 FPS. NVIDIA has also already built most of the software stack needed for autopilot applications, so comparatively little time and cost would be needed to implement features such as surround vision, auto-valet parking, and advanced driver assistance.
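Working backwards from that 1300MP/s figure gives a feel for those frame rates (a back-of-the-envelope sketch; NVIDIA hasn't published a per-camera breakdown):

```python
# Rough check on DRIVE PX's 1300MP/s aggregate camera bandwidth,
# split evenly across twelve cameras at standard frame sizes.
CAMERAS = 12
BUDGET = 1300 * 1_000_000  # aggregate pixels per second

for name, w, h in [("1080p", 1920, 1080), ("720p", 1280, 720)]:
    fps = BUDGET / (CAMERAS * w * h)
    print(f"{name}: ~{fps:.0f} FPS per camera")
# 1080p: ~52 FPS per camera (hence "around 60 FPS")
# 720p: ~118 FPS per camera (close to the quoted 120 FPS)
```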

In the case of surround vision, DRIVE PX is said to deliver a better experience by improving stitching of video to reduce visual artifacts and compensate for varying lighting conditions.
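NVIDIA hasn't detailed its stitching pipeline, but as a rough illustration of the kind of work involved, here is a minimal sketch using OpenCV's high-level Stitcher API, which handles feature matching, warping, exposure compensation, and seam blending internally (the camera frame filenames are hypothetical):

```python
# Minimal multi-camera stitching sketch using OpenCV. This is NOT
# NVIDIA's pipeline; it only illustrates the general problem. The
# Stitcher performs feature matching, warping, exposure compensation
# (for varying lighting), and seam blending internally.
import cv2

# Hypothetical frames from four surround cameras.
frames = [cv2.imread(f"camera_{i}.png") for i in range(4)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, surround = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("surround.png", surround)
else:
    print(f"Stitching failed with status {status}")
```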

The valet parking feature seems to build upon this surround vision system: it uses the cameras to build a 3D representation of the parking garage, combined with feature detection to identify a valid parking spot (no handicap logo, parking lines present, etc.), and then autonomously parks the car once a valid spot is found.
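Put into code, the decision logic NVIDIA describes might look roughly like the sketch below (entirely hypothetical; the detector outputs are stand-ins for the actual vision models):

```python
# Hypothetical sketch of the spot-search logic described above.
# Each field stands in for the output of a real vision model.
from dataclasses import dataclass

@dataclass
class SpotCandidate:
    has_parking_lines: bool  # lane-marking detector
    has_handicap_logo: bool  # sign/logo classifier
    is_occupied: bool        # object detector

def is_valid_spot(spot: SpotCandidate) -> bool:
    return (spot.has_parking_lines
            and not spot.has_handicap_logo
            and not spot.is_occupied)

# The car drives through the garage, evaluating candidates from the
# 3D reconstruction until a valid spot is found, then parks.
candidates = [
    SpotCandidate(True, True, False),   # handicap spot: skip
    SpotCandidate(True, False, True),   # occupied: skip
    SpotCandidate(True, False, False),  # valid: park here
]
print(next(c for c in candidates if is_valid_spot(c)))
```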

NVIDIA has also developed an auto-valet simulator system with five GTX 980 GPUs to make it possible for OEMs to rapidly develop self-parking algorithms.

The final feature of DRIVE PX, advanced driver assistance, is possibly the most computationally intensive of the three features discussed here. In order to deliver a truly useful driver assistance system, NVIDIA has leveraged neural network technologies, which allow for object recognition with extremely high accuracy.

While we won't dive into deep detail on how such neural networks work, in essence a neural network is composed of perceptrons, which are analogous to neurons. A perceptron receives various inputs, and given the stimulus level on each input it returns a Boolean (true or false). By combining perceptrons into a network, it becomes possible to teach a neural network to recognize objects in a useful manner. It's also important to note that such neural networks are easily parallelized, which means that GPUs can dramatically accelerate them. For example, DRIVE PX would be able to detect whether a traffic light is red, whether an ambulance has its sirens on or off, whether a pedestrian is distracted or aware of traffic, and the content of various road signs. Such neural networks would also be able to detect these objects even when they are partially occluded by other objects, or under differing lighting conditions and viewpoints.
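To make this concrete, here's a minimal single-perceptron sketch (the weights and threshold are arbitrary illustrative values, not anything from NVIDIA's networks):

```python
# A single perceptron as described above: weigh the inputs, sum them,
# and return a Boolean based on whether the sum crosses a threshold.
def perceptron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation >= threshold  # fires (True) or doesn't (False)

# Toy "should we brake?" decision from two stimuli.
obstacle_close = 1.0
light_is_red = 1.0
print(perceptron([obstacle_close, light_is_red],
                 weights=[0.6, 0.5], threshold=0.8))  # True (1.1 >= 0.8)
```

A real network chains many layers of such units and learns its weights from data rather than hand-picking them, but the parallel structure is already visible: every perceptron in a layer can be evaluated independently, which is exactly the kind of workload a GPU excels at.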

While honing such a system takes millions of test images to reach high accuracy levels, NVIDIA is leveraging its Tesla GPUs in the cloud, rather than training locally, to train the neural networks that are then loaded into DRIVE PX. In addition, failed identifications are logged and uploaded to the cloud in order to further improve the neural network. Updates can be delivered either over the air or at service time, which should mean that driver assistance will improve over time. It isn't a far leap to see how such technology could also be leveraged in self-driving cars.
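The overall feedback loop might be sketched like this (purely hypothetical stand-ins; NVIDIA has not disclosed its actual pipeline or update mechanism):

```python
# Hypothetical sketch of the cloud-train / deploy / log-failures loop.
# All names here are stand-ins, not NVIDIA's actual API.
def train_in_cloud(labeled_images):
    """Stand-in for training on Tesla GPU servers."""
    return {"version": len(labeled_images)}  # toy "model"

def deploy_to_car(model):
    """Stand-in for an over-the-air or at-service model update."""
    print(f"Car now running model v{model['version']}")

corpus = ["stop_sign.png", "ambulance.png"]  # hypothetical training set
deploy_to_car(train_in_cloud(corpus))

# Failed identifications logged by the car are uploaded and folded back
# into the corpus, so the next training pass improves recognition.
corpus.append("occluded_stop_sign.png")
deploy_to_car(train_in_cloud(corpus))
```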

Overall, NVIDIA seems to be planning for the DRIVE platforms to be ready next quarter, and production systems to be ready for 2016. This should mean that it's possible for vehicles launching in 2016 to have some sort of DRIVE system present, although it's possible that it would take until 2017 to see this happen.

194 Comments

  • Jumangi - Monday, January 5, 2015 - link

    Apple would never use Nvidia at the power consumption levels it brings. The power is pointless to them if it can't be put into a smartphone level device. Nvidia still doesn't get why nobody in the OEM market wants their tech for a phone.
  • Yojimbo - Monday, January 5, 2015 - link

    But the NVIDIA SOCs are on a less advanced process node, so how can you know that? You seem to be missing the whole point. The point is not what Apple wants or doesn't want. The point is to compare NVIDIA's GPU architecture to the PowerVR series 6XT GPU. You cannot directly compare the merits of the underlying architecture by comparing performance and power efficiency when the implementations are using different sized transistors. And the question is not the level of performance and power efficiency Apple was looking for for their A8. The question is simply peak performance per watt for each architecture.
  • OreoCookie - Tuesday, January 6, 2015 - link

    @Yojimbo
    The Shield was released with the Cortex A15-based Tegra K1, not the Denver-based K1. The former is not competitive with regards to CPU performance, the latter plays in the same league. AFAIK the first Denver-based K1 product was the Nexus 9. Does anyone know of any tablets which use the Denver-based K1?
  • lucam - Wednesday, January 7, 2015 - link

    Apple sells products that have a one-year life cycle; they don't sell chips, and therefore they don't need to do any marketing in advance the way NV punctually does at every CES.
  • TheJian - Monday, January 5, 2015 - link

    It's going to 16nm FinFET later this year (Parker). As noted here it's NOT in this chip due to time to market, and there's probably not as much to gain by shrinking that to 20nm vs. going straight to 16nm FinFET anyway. Even Qcom went off the shelf for S810 again for time to market.

    Not sure how you get that Denver is a disappointment. It just came out...LOL. It's a drop-in replacement for anyone using the 32-bit K1 (pin compatible), so I'm guessing we'll see many more devices pop up quicker than the first rev, but even then it will have a short life due to X1 and what is coming in H2 with Denver yet again (or an improved version).

    What do you mean K1 is in ONE device? You're kidding, right? Jeez, just go to Amazon and punch Nvidia K1 into the search. Acer, HP, NV Shield, Lenovo, Jetson, Nexus 9, Xiaomi (MiPad, not sold on Amazon, but you get the point)...The first 4 SoCs were just to get us to a desktop GPU. The real competition is just starting.

    Building the CPU wasn't just for mobile either. You can now go after desktops/higher-end notebooks etc. with NO WINTEL crap in them and all the regular PC trimmings (big PSU, huge fan/heatsink, HDs, SSDs, discrete GPU if desired, 16-32GB of RAM, etc). All of this is timed perfectly with 64-bit OSes getting polished up for MUCH more complicated apps. The same thing that happened to low-end notebooks with Chromebooks will now happen to low-end PCs at worst, and surely more later as apps advance on Android etc. and SoCs move further up the food chain in power and start running in desktop models at 4GHz with fans/heatsinks (with a choice of discrete GPU when desired). With no Wintel fee (copy of Windows + Intel CPU pricing), they will be great for getting poor people into great gaming systems that do most of what they'd want otherwise (internet, email, docs, media consumption). I hope they move here ASAP, as AMD is no longer competition for Intel CPU-wise.

    Bring on the ARM full-PC-like box! Denver was originally supposed to be x86 anyway LOL. Clearly they want in on Intel/AMD CPU territory, and why not, at CPU vs. SoC pricing? NV could sell an amped-up SoC at 4GHz for $110/$150 vs. Intel's top-end i5/i7s ($229/$339). A very powerful machine for $200 less cash but roughly ~perf (when taking out the Windows fee also, probably save $200 roughly). Most people in this group won't miss the Windows apps (many won't even know what Windows is, having grown up on a phone/tablet etc). Developing nations will love these as apps like Adobe Suite (fully featured) etc. get ported, making these cheap boxes powerful content creators and potent gamers (duh, NV GPU in them). If they catch on in places like the USA also, Wintel has an even bigger headache and will need to drop pricing to compete with ARM and all that its ecosystem brings. Good times ahead in the next few years for consumers everywhere. This box could potentially run Android, Linux, SteamOS, and Chrome in a quad-boot, giving massive software options at a great price for the hardware. Software for 64-bit on ARM will just keep growing yearly (games and adv apps).
  • pSupaNova - Tuesday, January 6, 2015 - link

    Agree totally with your post. NVIDIA did try to put good mobile chips in netbooks with ION & ION2, and Intel blocked them.

    Good to see that they have stuck at the job and are now in a position to start eating Intel's lunch.
  • darkich - Monday, January 5, 2015 - link

    That's just not true.

    The K1 has shipped in three high end Android Tablets - Nvidia shield, Xiaomi MiPad, and Nexus 9.

    Now, how many tablets got a Snapdragon 805?
    Exynos 5433?

    Tegra K1 market performance is simply the result of the fact that high end tablet market is taken up by Apple, and that it doesn't compete in mod range and low end.
  • darkich - Monday, January 5, 2015 - link

    *mid range
  • GC2:CS - Monday, January 5, 2015 - link

    It's the result of too-high power consumption, something OEMs prefer to keep low.

    That's why the Tegra K1 is used by just foolish Chinese manufacturers (like Tegra 4 in a phone) like Xiaomi, by Google in desperate need of a non-Apple high-end 64-bit chip (to showcase how 64-bit it is), and by Nvidia themselves.
  • Yojimbo - Monday, January 5, 2015 - link

    I think you're right that the K1 is geared more towards performance than other SOCs. The K1 does show good performance/watt, but it does so with higher performance, using more watts. And you're right that most OEMs have preferred a lower power usage. But it doesn't mean that the K1 is a poor SOC. NVIDIA is trying to work towards increasing the functionality of the platform by allowing it to be a gaming platform. That is their market strategy. It is probably partially their strategy because those are the tools they have available to them; that is their bread-and-butter. But presumably they also think mobile devices can really be made into a viable gaming platform. Thinking about it in the abstract it seems to be obvious... Mobile devices should at some point become gaming platforms. NVIDIA is trying to make this happen now.
