Automotive: DRIVE CX and DRIVE PX

While NVIDIA has been a GPU company throughout its entire history, the company will be the first to tell you that it knows it can't remain strictly a GPU company forever, and that it must diversify if it is to survive over the long run. The result of this need has been a focus by NVIDIA over the last half-decade or so on offering a wider range of hardware and even software. Tegra SoCs have been a big part of that plan so far, but in recent years NVIDIA has become increasingly discontented with being a pure hardware provider, leading the company to branch out in unusual ways: not just selling hardware, but selling buyers on whole solutions or experiences. GRID, GameWorks, and NVIDIA's Visual Computing Appliances have all been part of this branching-out process.

Meanwhile, with unabashed car enthusiast Jen-Hsun Huang at the helm, it's hardly a coincidence that the company has been branching out into automotive technology as well. Though still an early field for NVIDIA, the company's automotive Tegra sales have been a bright spot amidst the larger struggles Tegra has faced. Now, against the backdrop of CES 2015, the company is taking its next step into automotive technology by expanding beyond selling Tegras to automobile manufacturers and into selling manufacturers complete automotive solutions. To this end, NVIDIA is announcing two new automotive platforms: NVIDIA DRIVE CX and DRIVE PX.

DRIVE CX is NVIDIA’s in-car computing platform, which is designed to power in-car entertainment, navigation, and instrument clusters. While it may seem a bit odd to use a mobile SoC for such an application, Tesla Motors has shown that this is more than viable.

With NVIDIA's DRIVE CX, automotive OEMs get a Tegra X1 on a board that provides support for Bluetooth, modems, audio systems, cameras, and the other interfaces needed to integrate such an SoC into a car. This makes it possible to drive up to 16.6MP of display resolution, which works out to roughly two 4K displays or eight 1080p displays, although each DRIVE CX module can only drive three displays. In press photos the platform also appears to have a fan, which is likely necessary for the Tegra X1 to run continuously at maximum performance without throttling.
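As a quick sanity check, the pixel math works out as advertised. The short sketch below (our own arithmetic using standard 4K UHD and 1080p display modes, not an NVIDIA spec) divides the 16.6MP budget across both display types:

```python
# Quick sanity check of the 16.6MP figure, using standard display modes
# (our own arithmetic, not an NVIDIA spec).
DISPLAYS = {
    "4K UHD (3840x2160)": 3840 * 2160,  # ~8.29MP per display
    "1080p (1920x1080)": 1920 * 1080,   # ~2.07MP per display
}

BUDGET_MP = 16.6e6  # total pixel budget claimed for DRIVE CX

for name, pixels in DISPLAYS.items():
    count = BUDGET_MP / pixels
    print(f"{name}: {count:.1f} displays fit in the 16.6MP budget")
```

Both display types come out to almost exactly the advertised counts of two and eight respectively.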

NVIDIA showed off some examples of where DRIVE CX would improve over existing in-car computing systems, in the form of advanced 3D-rendered navigation that better conveys information, and 3D instrument clusters which are said to better match cars with premium designs. Although the latter is a bit gimmicky, DRIVE CX does seem to have a strong selling point: it provides an in-car computing platform with a large amount of compute while driving down the time and cost of developing such a platform.

While DRIVE CX is a fairly logical application of a mobile SoC, DRIVE PX puts mobile SoCs to work in car autopilot applications. To do this, the DRIVE PX platform uses two Tegra X1 SoCs to support up to twelve cameras with an aggregate bandwidth of 1,300 megapixels per second. This means it's possible to have all twelve cameras capturing 1080p video at around 60 FPS or 720p video at around 120 FPS. NVIDIA has also already built most of the software stack needed for autopilot applications, so implementing features such as surround vision, auto-valet parking, and advanced driver assistance should take comparatively little time and cost.
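Working backwards from that aggregate figure shows how approximate those frame rates are. A minimal sketch (our own back-of-the-envelope arithmetic, not an NVIDIA spec sheet) splits the 1,300MP/s budget evenly across twelve cameras:

```python
# Back-of-the-envelope check of the 1,300MP/s aggregate camera bandwidth
# (our own arithmetic, not an NVIDIA spec sheet).
AGGREGATE_PIXELS_PER_SEC = 1300e6
NUM_CAMERAS = 12

per_camera = AGGREGATE_PIXELS_PER_SEC / NUM_CAMERAS  # ~108MP/s per camera

for label, w, h in [("1080p", 1920, 1080), ("720p", 1280, 720)]:
    fps = per_camera / (w * h)
    print(f"{label}: ~{fps:.0f} FPS per camera with all 12 active")
```

That works out to roughly 52 FPS at 1080p and 118 FPS at 720p per camera, so the quoted figures are rounded but in the right ballpark.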

In the case of surround vision, DRIVE PX is said to deliver a better experience by improving the stitching of video to reduce visual artifacts and by compensating for varying lighting conditions.
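NVIDIA hasn't detailed its stitching pipeline, but a minimal sketch of one common technique, matching exposure between adjacent cameras in their overlapping region before blending, illustrates the kind of lighting compensation involved. The function below is our own illustration and makes no claim to be NVIDIA's approach:

```python
import numpy as np

def match_exposure(img_a: np.ndarray, img_b: np.ndarray,
                   overlap_px: int) -> np.ndarray:
    """Scale img_b so its overlap region matches img_a's mean brightness.

    img_a and img_b are 8-bit images from adjacent cameras whose fields
    of view overlap by overlap_px columns (img_a's right edge against
    img_b's left edge).
    """
    overlap_a = img_a[:, -overlap_px:].astype(np.float64)
    overlap_b = img_b[:, :overlap_px].astype(np.float64)
    gain = overlap_a.mean() / max(overlap_b.mean(), 1e-6)
    return np.clip(img_b.astype(np.float64) * gain, 0, 255).astype(np.uint8)
```

Without a correction like this, a seam between a camera facing the sun and one facing shadow would show an abrupt brightness step in the stitched view.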

The valet parking feature seems to build upon this surround vision system: it uses the cameras to build a 3D representation of the parking garage, combined with feature detection to identify a valid parking spot (no handicap logo, parking lines present, etc.), and then autonomously parks the car once a valid spot is found.
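At the decision level, the spot-validity test described above reduces to a handful of detector outputs. Here is a hypothetical sketch; the observation fields are stand-ins we've invented for illustration, not a real DRIVE PX API:

```python
from dataclasses import dataclass

@dataclass
class SpotObservation:
    has_parking_lines: bool  # lane-marking detector output
    has_handicap_logo: bool  # symbol detector output
    is_occupied: bool        # free-space estimate from the 3D reconstruction

def is_valid_spot(obs: SpotObservation) -> bool:
    """A spot is a candidate only if it is marked, unrestricted, and free."""
    return (obs.has_parking_lines
            and not obs.has_handicap_logo
            and not obs.is_occupied)

# e.g. is_valid_spot(SpotObservation(True, False, False)) -> True
```

The hard part, of course, is producing those boolean inputs reliably from camera data; the final decision logic is comparatively trivial.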

NVIDIA has also developed an auto-valet simulator system with five GTX 980 GPUs to make it possible for OEMs to rapidly develop self-parking algorithms.

The final feature of DRIVE PX, advanced driver assistance, is possibly the most computationally intensive of the three. In order to deliver a truly useful driver assistance system, NVIDIA has leveraged neural network technologies, which allow for object recognition with extremely high accuracy.

While we won't dive deep into how such neural networks work, in essence a neural network is composed of perceptrons, which are analogous to neurons. A perceptron receives various inputs, and given the stimulus level on each input it returns a Boolean (true or false). By combining perceptrons to form a network, it becomes possible to teach a neural network to recognize objects in a useful manner. It's also important to note that such neural networks are easily parallelized, which means GPUs can dramatically accelerate them. For example, DRIVE PX would be able to detect whether a traffic light is red, whether an ambulance has its sirens on or off, whether a pedestrian is distracted or aware of traffic, and the content of various road signs. Such a network would also be able to detect these objects even when they are occluded by other objects, or under differing lighting conditions and viewpoints.
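To make that description concrete, here is a toy perceptron (our own illustration; the networks running on DRIVE PX are vastly larger and use learned weights): a weighted sum of inputs plus a bias, thresholded to a Boolean. With the weights below it computes a logical AND of its two inputs:

```python
import numpy as np

def perceptron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> bool:
    """Fire (True) if the weighted sum of the stimuli crosses the threshold."""
    return float(np.dot(inputs, weights)) + bias > 0.0

# With these weights the perceptron only fires when both inputs are active:
weights = np.array([1.0, 1.0])
bias = -1.5

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(int(a), int(b), "->", perceptron(np.array([a, b]), weights, bias))
```

Real networks chain thousands of such units into layers, and because the units within a layer can be evaluated independently, the workload maps naturally onto a GPU's parallel hardware.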

While honing such a system takes millions of test images to reach high accuracy levels, NVIDIA is leveraging its Tesla GPUs in the cloud to train these neural networks, which are then loaded into DRIVE PX rather than being trained locally. In addition, failed identifications are logged and uploaded to the cloud in order to further improve the network. These updates can be delivered either over the air or at service time, which should mean that driver assistance improves over time. It isn't a far leap to see how such technology could also be leveraged in self-driving cars.

Overall, NVIDIA is planning for the DRIVE platforms to be ready next quarter, with production systems ready for 2016. This should make it possible for vehicles launching in 2016 to have some sort of DRIVE system present, although it may take until 2017 to see this happen.

Comments
  • Mayuyu - Monday, January 5, 2015 - link

    Apple should start licensing Nvidia GPUs instead of Imagination GPUs for next generation iDevices.
  • twotwotwo - Monday, January 5, 2015 - link

    It might be hard (or impossible) for them to do that without breaking compatibility with existing iOS games written around the PowerVR's quirks.
  • Krysto - Monday, January 5, 2015 - link

    The OpenGL stuff shouldn't be "impossible". Even the texture compression. I think developers can deal with that. Where Apple really shot itself in the foot is with the launch of the Metal API, though. Now they're stuck with Imagination for at least a few more years until they make it more abstract to work with multiple GPU architectures and not so..."metal". Or they can wait for OpenGL NG to appear, which will probably take just as much time.
  • techconc - Monday, January 5, 2015 - link

    How exactly did Apple "shoot itself in the foot" with Metal? They have a solution right now for mobile apps that rivals what is possible on other platforms. All the major game engines have already migrated to Metal. nVidia can show these generic OpenGL benchmarks all they want, but in practice, graphics-intensive apps on the A7 and A8 series chips are seeing far greater efficiency and performance improvements.
    OpenGL NG sounds great in concept, but it takes forever for a consortium like Khronos to develop new standards and just as long for them to be adopted. That is years away from becoming a reality. Yet Apple gets all of those benefits right now. From my perspective, this gives Apple a strong competitive advantage.
  • akdj - Sunday, January 11, 2015 - link

    Well said techconc
    Not sure if you're into development, SoC design or just a 'user', Krysto...but BOTH Apple's 'Metal' and the 'Swift' language were/are HUGE leaps forward to 'cut' through the peanut butter layer on the GPU that is OpenGL ES...so developers have 'direct' access to the 'metal' AKA GPU portion of the SoC. It's an amazing feat of 'software engineering' that lifted a huge load off the 'hardware engineering' side of the house....specifically because of this!
    I own a Note 4 for my business
    I own a 6+ as a personal driver.
    The former: a quad core with 2.7GHz procs, the Adreno 420, and 3GB of 'shared' SoC RAM
    The latter: a dual core with 1.5GHz procs, Imagination's solution for graphics, and 1GB of 'shared' SoC RAM
    I love them both for different reasons, BUT: play Asphalt 8 on both, then tell me 'more muscle, power, RAM, cores or core speed' are the reasons I'm playing a more fluid game on iOS vs Android
    I'm ambidextrous and enjoy using both. Same in the office or home environment. OS X is primary but I've always had a Windows box since the big 'switch' a decade ago
    Point being, software is damn near, and sometimes MORE, important than hardware to the end user's experience. No one outside of us dorks, geeks, and pocket protector wearing Homers has a clue what FinFET, latency, core clock speed, or hell....cores for that matter MEAN! They couldn't tell ya if they're rocking 1, 2, 3 GB of RAM or NO RAM, lol.
    The ultimate end experience is designed and defined by the software and hardware working in synergy WITH a development community willing to step up and develop a million optimized apps for your system. If it's running iOS or Android, you're in luck. Windows, a bit tougher to 'win'...and if this SoC does indeed have the power/TDP numbers they're bragging about, well, Apple's never been one to change supply chains
    There's a reason Tim is CEO, & that's the biggest. When you're dropping 100,000 products a year, you HAVE to have suppliers that can fulfill your orders and needs
  • adriaaaaan - Thursday, January 15, 2015 - link

    Are you honestly expecting a phone with a weaker GPU pushing 50% more pixels to outperform the other? Of course the iPhone is smoother in games; it's lower res than the Note 4
  • Maleficum - Wednesday, January 21, 2015 - link

    Oh yeah, the Note 4 has to push more pixels than the 6+. However, a resolution that high simply isn't necessary in the first place, and more importantly, over 30% of the pixel data the SoC has to process is nullified by the PenTile AMOLED. What a waste!
  • Maxjonny55 - Saturday, June 20, 2015 - link

    Metal has made it easier to access the GPU, and the reason Apple has done this is the lack of power in their CPUs compared to Android devices. Sure, GPUs can run apps with extra power, but so what? OpenGL has always been doing that! More developers know Java and OpenGL, which makes development easier, as every hardware vendor apart from Apple optimizes their hardware for it.

    I would not want to compare Asphalt 8 between devices, as horsepower and muscle have nothing to do with it, but rather lazy work on the part of the game's creators.

    Providing access to Metal will make a difference to some apps, no doubt, but not all. OpenGL provides access to the GPU too; not sure why it took Apple so long. I have a Nexus 9 and an iPad Air 2 and, Apple hype aside, can't see what the Air 2 has to offer in performance! The Nexus 9's single core outperforms the Air 2, and so does its year-older GPU.
  • Wolfpup - Wednesday, September 30, 2015 - link

    "lack of power"? Apple's CPUs blow away any other ARM CPUs.
  • Maleficum - Wednesday, January 21, 2015 - link

    OpenGL is FAR outdated. It has way too many performance bottlenecks due to its aged design, and it doesn't scale very well with modern GPU/CPU architectures.
    Both MS and Apple recognized this, et voilà: Metal and the upcoming DX12 are their answers.

    Pity that Android can't keep up with this, stuck with the open-source mess.
