Automotive: DRIVE CX and DRIVE PX

While NVIDIA has been a GPU company throughout its entire history, they will be the first to tell you that they can’t remain strictly a GPU company forever, and that they must diversify if they are to survive over the long run. The result of this need has been a focus by NVIDIA over the last half-decade or so on offering a wider range of hardware and software. Tegra SoCs have been a big part of that plan so far, but in recent years NVIDIA has grown increasingly dissatisfied with being a pure hardware provider, leading the company to branch out in unusual ways: not just selling hardware, but selling buyers on whole solutions and experiences. GRID, GameWorks, and NVIDIA’s Visual Computing Appliances have all been part of this branching-out process.

Meanwhile, with unabashed car enthusiast Jen-Hsun Huang at the helm of NVIDIA, it’s hardly coincidental that the company has also been branching out into automotive technology. Though still an early field for NVIDIA, the company’s automotive Tegra sales have been a bright spot amid Tegra’s larger struggles. And now, against the backdrop of CES 2015, the company is taking its next step into automotive technology by expanding beyond selling Tegras to automobile manufacturers, and into selling manufacturers complete automotive solutions. To this end, NVIDIA is announcing two new automotive platforms: NVIDIA DRIVE CX and DRIVE PX.

DRIVE CX is NVIDIA’s in-car computing platform, which is designed to power in-car entertainment, navigation, and instrument clusters. While it may seem a bit odd to use a mobile SoC for such an application, Tesla Motors has shown that this is more than viable.

With NVIDIA’s DRIVE CX, automotive OEMs get a Tegra X1 on a board that provides support for Bluetooth, modems, audio systems, cameras, and the other interfaces needed to integrate such an SoC into a car. This makes it possible to drive up to 16.6MP of total display resolution, which works out to around two 4K displays or eight 1080p displays, although each DRIVE CX module can only drive three displays at once. In press photos the platform also appears to have a fan, which is likely necessary for the Tegra X1 to run continuously at maximum performance without throttling.
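
As a quick sanity check on that display figure, the math works out exactly; the following back-of-the-envelope calculation (our own arithmetic, using standard display resolutions) shows how both configurations arrive at the same 16.6MP total:

```python
# Back-of-the-envelope check of DRIVE CX's quoted 16.6MP display budget.
# The resolutions are standard; the aggregation is our own arithmetic.
RES_4K = 3840 * 2160       # 8,294,400 pixels per 4K display
RES_1080P = 1920 * 1080    # 2,073,600 pixels per 1080p display

print(f"Two 4K displays:      {2 * RES_4K / 1e6:.1f} MP")     # 16.6 MP
print(f"Eight 1080p displays: {8 * RES_1080P / 1e6:.1f} MP")  # 16.6 MP
```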

NVIDIA showed off some examples of where DRIVE CX would improve over existing car computing systems, in the form of advanced 3D rendering for navigation that better conveys information, and 3D instrument clusters which are said to better match cars with premium designs. Although the latter is a bit gimmicky, DRIVE CX does seem to have a strong selling point: it provides an in-car computing platform with a large amount of compute while driving down the time and cost spent developing such a platform.

While DRIVE CX seems to be a logical application of a mobile SoC, DRIVE PX puts mobile SoCs to work in car autopilot applications. To do this, the DRIVE PX platform uses two Tegra X1 SoCs to support up to twelve cameras with an aggregate bandwidth of 1,300 megapixels per second. This means it’s possible to have all twelve cameras capturing 1080p video at around 60 FPS, or 720p video at 120 FPS. NVIDIA has also already built most of the software stack needed for autopilot applications, so comparatively little time and cost would be needed to implement features such as surround vision, auto-valet parking, and advanced driver assistance.
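
The camera bandwidth figure can be sanity-checked with similar arithmetic. Our rough math below (using standard resolutions; NVIDIA's exact accounting may differ) shows both configurations landing just above the quoted 1,300MP/s aggregate, which is presumably why NVIDIA hedges with "around" 60 FPS:

```python
# Rough check of DRIVE PX's quoted 1,300MP/s aggregate camera bandwidth.
# The camera count is NVIDIA's figure; the per-mode math is our own.
def aggregate_mps(width, height, fps, cameras=12):
    """Total megapixels per second across all cameras."""
    return cameras * width * height * fps / 1e6

print(f"12 x 1080p60: {aggregate_mps(1920, 1080, 60):,.0f} MP/s")   # ~1,493 MP/s
print(f"12 x 720p120: {aggregate_mps(1280, 720, 120):,.0f} MP/s")   # ~1,327 MP/s
```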

In the case of surround vision, DRIVE PX is said to deliver a better experience by improving stitching of video to reduce visual artifacts and compensate for varying lighting conditions.
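
To give a rough idea of what lighting compensation in stitching involves, the sketch below gain-corrects one camera's frame so that its brightness matches a neighboring camera's in their overlapping strip. This is our simplified illustration, assuming side-by-side cameras with a fixed overlap; it is not NVIDIA's implementation, which is surely far more sophisticated:

```python
import numpy as np

# Simplified exposure compensation for stitching (our sketch): scale one
# camera's frame so its brightness matches a neighbor's in the overlap.
def match_exposure(frame, neighbor, overlap_cols=64):
    """Gain-correct `frame` (left edge) against `neighbor` (right edge)."""
    gain = neighbor[:, -overlap_cols:].mean() / frame[:, :overlap_cols].mean()
    return np.clip(frame * gain, 0, 255)

# Toy usage with synthetic grayscale frames of differing brightness.
bright = np.full((720, 1280), 180.0)
dim = np.full((720, 1280), 90.0)
print(match_exposure(dim, bright).mean())  # ~180: dim frame brought up to match
```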

The valet parking feature seems to build upon this surround vision system: it uses the cameras to build a 3D representation of the parking lot, along with feature detection, to drive through a garage looking for a valid parking spot (no handicap logo, parking lines present, etc.) and then autonomously park the car once a valid spot is found.
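
In code terms, the spot-validity test NVIDIA describes might reduce to something like the sketch below. The names here are purely hypothetical, our own invention; in a real system, upstream vision code would populate these flags from the 3D reconstruction and feature detectors:

```python
from dataclasses import dataclass

# Hypothetical spot-validity check; field and function names are ours.
@dataclass
class ParkingSpot:
    has_parking_lines: bool  # lane markings detected around the space
    has_handicap_logo: bool  # accessibility symbol painted on the ground
    is_occupied: bool        # another vehicle already in the space

def is_valid_spot(spot: ParkingSpot) -> bool:
    """A spot qualifies if it is marked, not reserved, and empty."""
    return (spot.has_parking_lines
            and not spot.has_handicap_logo
            and not spot.is_occupied)

print(is_valid_spot(ParkingSpot(True, False, False)))  # True: park here
```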

NVIDIA has also developed an auto-valet simulator system with five GTX 980 GPUs to make it possible for OEMs to rapidly develop self-parking algorithms.

The final feature of DRIVE PX, advanced driver assistance, is possibly the most computationally intensive of the three features discussed. In order to deliver a truly useful driver assistance system, NVIDIA has leveraged neural network technologies, which allow for object recognition with extremely high accuracy.

While we won’t dive into deep detail on how such neural networks work, in essence a neural network is composed of perceptrons, which are analogous to neurons. A perceptron receives various inputs, and given the stimulus level on each input, it returns a Boolean (true or false). By combining perceptrons to form a network, it becomes possible to teach the network to recognize objects in a useful manner. It’s also important to note that such neural networks are easily parallelized, which means that GPUs can dramatically improve their performance. For example, DRIVE PX would be able to detect whether a traffic light is red, whether an ambulance has its sirens on or off, whether a pedestrian is distracted or aware of traffic, and the content of various road signs. Such networks can also detect these objects even when they are occluded by other objects, or under differing lighting conditions and viewpoints.
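
To make that concrete, below is a minimal perceptron sketch (our illustration, with made-up weights), along with a short demonstration of why the workload parallelizes so well: a whole layer of perceptrons over a whole batch of inputs is just a matrix multiply, which is exactly the kind of work GPUs excel at:

```python
import numpy as np

# A minimal perceptron (our sketch; the weights are made up): it weighs
# its inputs, adds a bias acting as a threshold, and returns a Boolean.
def perceptron(inputs, weights, bias):
    """Fire (True) if the weighted stimulus clears the threshold."""
    return float(np.dot(inputs, weights)) + bias > 0.0

features = np.array([0.9, 0.1])  # hypothetical image features
weights = np.array([1.5, -0.5])  # hypothetical learned weights
print(perceptron(features, weights, bias=-1.0))  # True: 1.3 - 1.0 > 0

# Why GPUs help: a layer of perceptrons over a batch of inputs is a
# single matrix multiply, which parallelizes trivially.
batch = np.random.rand(1024, 2)  # 1,024 inputs at once
layer = np.random.randn(2, 64)   # 64 perceptrons evaluated together
fired = (batch @ layer) > 0.0    # one matmul, 65,536 decisions
print(fired.shape)               # (1024, 64)
```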

While honing such a system takes millions of test images to reach high accuracy levels, NVIDIA is leveraging Tesla GPUs in the cloud to train the neural networks that are then loaded into DRIVE PX, rather than training them locally. In addition, failed identifications are logged and uploaded to the cloud in order to further improve the neural network. These updates can be delivered either over the air or at service time, which should mean that driver assistance improves with time. It isn’t a far leap to see how such technology could also be leveraged in self-driving cars.

Overall, NVIDIA seems to be planning for the DRIVE platforms to be ready next quarter, with production systems ready for 2016. This should make it possible for vehicles launching in 2016 to include some sort of DRIVE system, although it may take until 2017 before that happens.

Comments

  • harrybadass - Monday, January 5, 2015 - link

    Nvidia's X1 is somehow already obsolete when compared to the A8X.

    GXA6850
    Clusters 8
    FP32 ALUs 256
    FP32 FLOPs/Clock 512
    FP16 FLOPs/Clock 1024
    Pixels/Clock (ROPs) 16
    Texels/Clock 16
  • psychobriggsy - Monday, January 5, 2015 - link

    NVIDIA are claiming power savings compared to the A8X, at the same performance level.

    And additionally, they can run the X1 GPU at ~1GHz to achieve greater performance than the A8X. However the A8X's lower GPU clock is just a design decision by Apple so they can guarantee battery life isn't sucky when playing games.

    Yet hardware-wise, the X1's GPU specification isn't that amazing when compared to the A8X's GPU.

    Last up, how does a quad-A57 at 2+ GHz compare to a dual 1.5GHz Cyclone...
  • techconc - Monday, January 5, 2015 - link

    Isn't it always amazing how company A's future products compete so well against company B's current products? The X1 won't be competing with the A8X, it will be competing against the A9X. If you're familiar with the PowerVR Rogue 7 series GPUs, you wouldn't be terribly impressed with this recent nVidia announcement. It keeps them in the game as a competitor, but they will not be on top. Further, I'm quite certain that Apple's custom A9 chip will compare well to off-the-shelf reference designs like the A57 in terms of performance, efficiency, or both. If there were no benefits to Apple's custom design, they would simply use the reference designs as nVidia has chosen to do.
  • Yojimbo - Monday, January 5, 2015 - link

    Yes but how do you compare your product to something that isn't out yet? You can't test it against rumors. It must be compared with the best of what is out there and then one must judge if the margin of improvement over the existing product is impressive or not. The PowerVR Rogue 7 series is due to be in products when? I doubt it will be any time in 2015 (maybe I'm wrong). When I read the Anandtech article on the details of IMG's upcoming architecture a few months back I had a feeling they were trying to set themselves up as a takeover target. I don't remember exactly why but it just struck me that way. I wonder if anyone would want to risk taking them over while this NVIDIA patent suit is going on, however.
  • OreoCookie - Tuesday, January 6, 2015 - link

    The Tegra X1 isn't out yet either!
    If you look at Apple's product cycle it's clear that in the summer Apple will release an A9 when they launch the new iPhone. And you can look at Apple's history to estimate the increase in CPU and GPU horsepower.
  • Yojimbo - Tuesday, January 6, 2015 - link

    But NVIDIA HAS the Tegra X1. They are the ones making the comparisons, and the Tegra X1 is the product they are comparing! Apple seems to have been releasing their phones in the fall recently, but neither NVIDIA nor the rest of the world outside Apple and its partners has any idea what the A9 is like, so it can't be used for a comparison! It's the same for everyone. When Qualcomm announced the Snapdragon 810 in April of 2014 they couldn't have compared it to the Tegra X1, even though that's what it will end up competing with for much of its life cycle.
  • Yojimbo - Monday, January 5, 2015 - link

    Perhaps those are the raw max-throughput numbers, but if it were that simple there would be no reason for benchmarks. Now let's see how they actually perform.
  • edzieba - Monday, January 5, 2015 - link

    12 cameras at 720p120?! VERY interested in DRIVE PX, even if it'd never end up near a car.
  • ihakh - Monday, January 5, 2015 - link

    About the Intel chip, I have to say that it is a very good CPU (think SSE and AVX) plus a little GPU, while the NVIDIA chip is a good GPU plus a reasonable CPU.

    You can have x86 Windows on the Intel chip and run something like MATLAB (also Android), and you can have a good gaming experience with NVIDIA's.

    Each of them has its use for certain users. It's not like every program can use 1 TFLOPS of Tegra GPU, and it's not like every user is "game crazy". Intel's Core M has its own users.

    And of course the Tegra chip runs very hot for mobiles, and it is a hard decision for the engineers who design phones and tablets to migrate from a known chip like Snapdragon to an unknown, new chip like Tegra.

    I think both NVIDIA and Intel are doing well and neither deserves blame, but it would be a good idea for NVIDIA to make a cooler chip for mobiles.
  • Morawka - Monday, January 5, 2015 - link

    So compared to the K1 it's twice as fast, and it also uses half the energy.

    So does that mean it will still be a 7W SoC, albeit twice as fast?
