You can find our news post on the new Drive PX Pegasus for Level 5 autonomous vehicles when it goes live at https://www.anandtech.com/show/11913

03:44AM EDT - Stay tuned for coverage of NVIDIA's Keynote at GTC Europe. Heading up the presentation is NVIDIA CEO, Jensen Huang. GTC Europe is now NVIDIA's key automotive event due to its location in Germany, so expect some automotive announcements and relationship disclosures with major car manufacturers.

03:45AM EDT - We're seated for the keynote - it starts in 15 minutes

03:48AM EDT - I'm sitting second row and center. NVIDIA has put on special tables for press - no more laptop sliding off my lap!

03:49AM EDT - Last year's event in Amsterdam was the first GTC Europe, had 1400 attendees

03:49AM EDT - This year we're in Munich, at the hub of European automobile country, and NVIDIA has over 3000 attendees

03:50AM EDT - Lots of talks this week about how companies and developers are using NVIDIA hardware to shape their focus

03:51AM EDT - Again, we're in the heart of European car country. We were able to confirm that this event is basically going to be NVIDIA's main automotive event each year

03:51AM EDT - They've booked this venue for the next two years at least

03:51AM EDT - On the show floor are a few NVIDIA Drive automobiles, and they have a test track outside to show current progress as well as a nod to autonomous motorsport

03:52AM EDT - First up though is the Keynote, with CEO Jensen Huang set to talk for two hours on new technologies and partnerships

03:52AM EDT - I fully expect a German car manufacturer representative (or three) to appear on stage at some point as well

03:53AM EDT - We've been pre-briefed on a few of the announcements today, and the full embargo for those will lift at noon Germany time

03:54AM EDT - Until then we'll have to say what we see

04:01AM EDT - Here we go, intro video

04:01AM EDT - Uses for AI, showcasing some partnerships

04:02AM EDT - TomTom, PACCAR (long-haul trucking)

04:02AM EDT - 'I am a protector, a learner, a creator' etc

04:02AM EDT - Using AI to create art and symphonies

04:02AM EDT - 'I am AI, brought to life by NVIDIA'

04:03AM EDT - Jensen Huang (JSH) to the stage

04:03AM EDT - 'Enabling breakthroughs in science around the world is what GTC is all about'

04:04AM EDT - End of Moore's Law is propelling the future of computing

04:04AM EDT - Transistor counts have kept scaling for 40 years, but single-threaded performance growth is slowing

04:05AM EDT - >This is the usual 'GPUs are the future, not CPUs' intro we've heard a few times before

04:06AM EDT - The CPU market used to get higher performance with the same software, or more features each generation, but recently that has been slowing

04:06AM EDT - Sub-threshold leakage from smaller transistors made it impossible to deliver the expected performance increases without increasing power dramatically

04:07AM EDT - Using GPUs, the cloud is smart about grouping and collating photos. The same technology lets cars drive themselves or helps find cancers

04:08AM EDT - This technology has been around for many years, but it requires high performance to train these algorithms

04:09AM EDT - Deep Neural Networks need mountains of data to train the neurons and synapses

04:10AM EDT - The emergence of GPUs, and the development of new neural network techniques, have turbocharged this revolution

04:10AM EDT - GPU-accelerated computing is all NVIDIA does. It has invested $30b in the pursuit of GPU-accelerated computing

04:11AM EDT - In 15 years, the shortfall from the failing of Moore's Law could be 1000x in performance. This is the promise of GPU computing, because we do it in a fundamentally different way

04:11AM EDT - We rely on large numbers of transistors rather than fast transistors

04:12AM EDT - >I'm wondering when we'll see multi-die GPUs

04:12AM EDT - >NVIDIA will need a technology similar to Intel's EMIB to grow another order of magnitude. Perhaps

04:13AM EDT - >Showing Volta, V100

04:13AM EDT - No way to build a CPU with 21 billion transistors. That's not true with GPU compute

04:14AM EDT - 120 TFLOPS is 120 CPUs. One Volta replaces one rack of CPUs

04:14AM EDT - The emergence of AI where software writes itself by data torture, along with high performance GPUs, turbocharged AI computing

04:15AM EDT - GTC attendance has increased 10x in 5 years, CUDA devs increased 15x in 5 years, CUDA downloads 5x in 5 years

04:15AM EDT - NVIDIA has reached critical mass with CUDA developers. Half of all CUDA downloads were in the last 12 months

04:16AM EDT - Computer graphics is actually a rare application: a high volume compute medium for video games

04:16AM EDT - High computational intensity and extremely high volumes came together to give us an R&D budget to fund the advancement of GPUs

04:17AM EDT - Very few other segments require both extreme compute and extreme scale like this

04:17AM EDT - The advancement of gaming hardware has improved computational science in ways never thought possible before

04:18AM EDT - FEA, Climate, Seismic, Fluid Dynamics, Molecular Dynamics, Astrophysics, Quantum Chemistry, Medical Imaging

04:18AM EDT - Now Deep Learning

04:18AM EDT - 'Why can't you compute a company database like a video game'

04:19AM EDT - >There's a DGX-1 on stage off to the side. Probably for later

04:19AM EDT - Database analytics companies are starting to emerge in AI-based data science

04:20AM EDT - Instead of requiring a supercomputer, companies can accelerate neural networks with much less hardware, or scale out

04:22AM EDT - The number of GPU-accelerated industries is growing, with trillions of dollars involved in these industries

04:22AM EDT - The 2017 Nobel Prizes in Physics and Chemistry used NVIDIA GPUs

04:22AM EDT - Cryogenic Electron Microscopy (Chemistry) and Detection of Gravitational Waves (Physics)

04:24AM EDT - Just about every modern instrument, from nano-scale to universe scale, has a GPU in it for compute

04:25AM EDT - Now discussing Holodeck

04:26AM EDT - When you go into a holodeck environment, you want to interact with the environment and obey the laws of physics

04:26AM EDT - When you touch something, you should feel it

04:26AM EDT - Photorealistic models, Virtual Collaboration, Integrated AI and Physically Simulated Interactions

04:27AM EDT - Now for a live demo

04:27AM EDT - >They are doing Holodeck demos at the show. I've got some time with it in the schedule

04:27AM EDT - These people are in different locations but in the same VR environment

04:28AM EDT - Everyone sees the same objects, at 90 FPS

04:28AM EDT - Brand new McLaren model straight from CAD in the environment

04:29AM EDT - Not trying to build a video game, but building the lab of the future

04:29AM EDT - AI could end up handing you tools

04:29AM EDT - Or smooths over the final aspects based on company details

04:30AM EDT - Use the environment to train AI: train robotic arms for manufacturing etc

04:30AM EDT - Using NVIDIA Isaac

04:30AM EDT - The idea is that this is a collaboration tool

04:30AM EDT - Use a geometry clipping tool to see inside the product

04:31AM EDT - 30k objects in the holodeck

04:31AM EDT - Examine each part individually

04:31AM EDT - So three people in this environment are able to work on the object in real time

04:32AM EDT - With haptics, collision detection is possible (we assume the haptics designer is not NVIDIA)

04:32AM EDT - Every location will need a copy of the blueprints, one person will act as host for updates. All done peer-to-peer (so far)

04:33AM EDT - Move CAD designs from CATIA/NX/Creo/Alias to Maya/3dsMAX, then import into Holodeck

04:33AM EDT - This is a plugin architecture, the aim is to support almost every CAD package

04:34AM EDT - Early Access is now available

04:34AM EDT - Requires GTX 1080 Ti / Titan Xp / Quadro P6000 as a minimum spec. Single GPU only

04:34AM EDT - Now more into AI

04:35AM EDT - 3000 papers published on Deep Learning per year

04:35AM EDT - 'If 1 in 100 researchers has publishable data, that's a ton of researchers'

04:36AM EDT - >I think Jensen doesn't realize that 1 in 100 is a crazy ratio. 1 in 1.5, maybe - it's publish or perish, and most researchers only submit work that's usually worth publishing, so the field is very self-selective

04:37AM EDT - If natural language and interaction is cracked, there are incredible opportunities

04:37AM EDT - Time to show a few cases of AI

04:38AM EDT - Teaching an artificial network to finish a ray-tracing job to fill in the noise

04:39AM EDT - Ray-tracing becomes a solvable problem previously hindered by extreme compute requirements

04:39AM EDT - Or using AI to touch up old photos by putting in the right colors

04:40AM EDT - Now audio-driven facial animation: going from audio (or even text) directly to facial animation by training an AI

04:40AM EDT - Now pose estimation

04:41AM EDT - Inferring a pose in 3D space, teaching an AI to generate a 3D pose from a 2D video

04:42AM EDT - Character Animation: Teaching an AI to navigate a topology with the required animation

04:42AM EDT - Learning how to climb, tip toe, navigate stones

04:43AM EDT - >The key here is the neural network training. We know that conventional path-following algorithms in games are never that great

04:43AM EDT - One shot imitation learning: teaching a robot to stack bricks with only a couple of examples in advance

04:44AM EDT - 1900 Deep Learning startups based on NVIDIA's AI Platform

04:44AM EDT - around 1/3 are in IT Services

04:45AM EDT - >so it's interesting that 'IT Services' related demos were not included in the presentation

04:45AM EDT - NVIDIA AI is in every major cloud and data center

04:46AM EDT - As of two weeks ago, all of NVIDIA's cloud partners have made Volta available to their customers

04:46AM EDT - Startups no longer need to build their own Volta supercomputer - they can rent one to develop and then invest when it is time to scale

04:47AM EDT - Using one architecture, develop and scale on NVIDIA all over the world

04:47AM EDT - So after developing the AI (training), you need to run the AI (inference)

04:49AM EDT - Three classes of inference: cloud for big data, IoT devices for small inference, then mid range autonomy that leverages both local and cloud compute

04:50AM EDT - That mid-range of implementation is increasingly the most important and interesting segment

04:51AM EDT - Drones, automotive, manufacturing etc

04:51AM EDT - 'There aren't enough truck drivers in the world'

04:53AM EDT - The number of types of devices we can support is exploding, as are the networks we support

04:53AM EDT - One neural network generating real and fake data for another neural network trying to pick apart the real from the fake
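For anyone unfamiliar with the adversarial setup being described, here is a minimal 1-D toy sketch in plain numpy: a generator fabricates samples while a discriminator learns to tell real from fake, and each is updated against the other. Every distribution, parameter, and learning rate here is invented for illustration - this is not NVIDIA's implementation.

```python
import numpy as np

# Toy 1-D adversarial game: generator G(z) = mu + z tries to imitate samples
# from a real distribution; discriminator D(x) = sigmoid(a*x + b) tries to
# tell them apart. Gradients are written out by hand for transparency.

rng = np.random.default_rng(42)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

real_mean = 4.0    # the target distribution the generator should imitate
mu = 0.0           # generator parameter
a, b = 0.1, 0.0    # discriminator parameters
lr, batch = 0.05, 64
history = []

for step in range(3000):
    real = rng.normal(real_mean, 0.5, batch)
    fake = mu + rng.normal(0.0, 0.5, batch)

    # Discriminator update: logistic regression pushing D(real)->1, D(fake)->0.
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    a -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    b -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator update: non-saturating loss -log D(fake) pushes D(fake)->1.
    d_fake = sigmoid(a * fake + b)
    mu -= lr * np.mean(-(1.0 - d_fake) * a)
    history.append(mu)

# The two players oscillate around equilibrium, so average the recent history.
mu_avg = float(np.mean(history[-500:]))
print(f"generator mean (time-averaged): {mu_avg:.2f}, target: {real_mean}")
```

The generator's mean drifts toward the real distribution's mean without ever seeing a loss defined on the real data directly - only the discriminator's opinion. In practice the same game is played with deep networks over images.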

04:54AM EDT - Reinforcement learning is how we train robotics

04:56AM EDT - Language Translation: JSH expects real time translation in only a few years

04:57AM EDT - Now covering TensorRT 3, the ability to compile and optimize neural networks based on any framework for any hardware

04:57AM EDT - Take the same network and optimize for 300W or for 300mW

04:58AM EDT - Kernel Auto-tuning and optimization
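TensorRT's own kernel auto-tuning is proprietary, but one standard optimization behind the "300W or 300mW" range is reduced-precision inference. Here is a hedged numpy sketch of symmetric per-tensor INT8 weight quantization - an illustration of the general idea, not NVIDIA's actual calibration scheme.

```python
import numpy as np

# Symmetric per-tensor INT8 quantization: map float32 weights onto int8 with a
# single scale factor, so matrix math can run on cheap integer units.

def quantize_int8(w):
    """Quantize float32 weights to int8 plus a per-tensor scale."""
    scale = np.abs(w).max() / 127.0          # largest magnitude maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Rounding error is bounded by half a quantization step; storage drops 4x.
err = np.abs(w - w_hat).max()
print(f"max reconstruction error: {err:.5f} (quantization step {scale:.5f})")
```

Trading this small reconstruction error for 4x smaller weights and integer arithmetic is how the same trained network can be retargeted at very different power envelopes.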

05:00AM EDT - 50 layer ResNet runs at 300 images/sec on V100 + TensorFlow. Run through TensorRT and performance moves to 5700 images/sec

05:00AM EDT - Takes advantage of Tensor Cores etc

05:01AM EDT - Reducing latency

05:01AM EDT - NVIDIA claiming 10x better data center TCO

05:02AM EDT - Replacing 160 CPU servers capable of 45k images/second at 65 kW with four GPU servers: 1/6th the cost, 1/20th the power

05:03AM EDT - 1 HGX with 8 Tesla V100 GPUs: 3 kW, 45k images/sec

05:03AM EDT - $100k for NVIDIA vs $600k for CPUs
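The on-stage numbers are easy to sanity check. A quick back-of-the-envelope in Python, using the figures exactly as quoted (both the "four servers" and "1 HGX" framings were used on stage, so treat the mapping loosely):

```python
# Back-of-the-envelope check of the on-stage TCO comparison, figures as quoted.
cpu_cost_usd = 600_000   # 160 CPU servers doing 45k images/sec
cpu_power_kw = 65.0
gpu_cost_usd = 100_000   # GPU-accelerated replacement
gpu_power_kw = 3.0       # one HGX server with 8x Tesla V100

cost_ratio = cpu_cost_usd / gpu_cost_usd    # claimed ~6x cheaper
power_ratio = cpu_power_kw / gpu_power_kw   # claimed ~20x lower power
print(f"cost: {cost_ratio:.0f}x cheaper, power: {power_ratio:.1f}x lower")
# prints "cost: 6x cheaper, power: 21.7x lower"
```

So the 1/6th cost and roughly 1/20th power claims follow directly from the quoted figures.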

05:05AM EDT - example: Inference on CPU does 4.8 images/sec identifying flowers

05:05AM EDT - Compute time approx 200ms on the CPU

05:06AM EDT - 'Imagine applying this technology to medical imaging, to robotics, to automotive'

05:07AM EDT - The problem is 4.8 images/sec

05:07AM EDT - Using TensorRT on one GPU. 500 images/sec

05:08AM EDT - 10ms compute time

05:08AM EDT - 100x Speedup.

05:08AM EDT - >Also 100x cost, of course. But lower power

05:09AM EDT - Now inference on speech

05:10AM EDT - Training voice recognition to be applied to other AIs

05:10AM EDT - Watch any movie and search for words

05:10AM EDT - Watches a film in super real time

05:11AM EDT - Talking to an AI that will detect what you say, then search the episode to find where that phrase was said and instantly move to that scene
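The heavy lifting in that demo is the speech recognition; once a time-aligned transcript exists, jumping to a phrase is a simple lookup. A toy sketch with made-up transcript data:

```python
# Toy version of the demo's last step: searching a time-aligned transcript.
# The transcript contents and timestamps here are invented for illustration.

transcript = [
    (12.5, "i am ai"),
    (48.0, "brought to life by nvidia"),
    (95.2, "gpu computing is the path forward"),
]

def find_phrase(phrase):
    """Return the timestamps (seconds) of segments containing the phrase."""
    phrase = phrase.lower()
    return [t for t, text in transcript if phrase in text]

print(find_phrase("brought to life"))   # -> [48.0]
```

The interesting part of the pipeline is generating that transcript faster than real time; seeking to the matched timestamp is trivial afterwards.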

05:12AM EDT - Now Vincent AI, from Cambridge Consultants

05:12AM EDT - A trained artist for collaboration

05:13AM EDT - Uses adversarial network technologies working together / against each other to generate megapixel images based on certain artists

05:14AM EDT - The idea is to show these adversarial networks only a few sets of images and have them compete against each other to generate magnitudes more data and refine the accuracy

05:14AM EDT - Rather than use mountains of data, only use a small amount of data

05:15AM EDT - If you start to draw a scene in a certain style, it will try and determine what the style is and refine the coarseness for that style

05:17AM EDT - Different networks will see the same image in different styles

05:18AM EDT - NVIDIA is on the path to solve the most important tasks needing AI

05:19AM EDT - Now autonomous vehicles

05:19AM EDT - Showing a single cab design that can be used in either an autonomous flying vehicle or a road-based vehicle

05:20AM EDT - NVIDIA Drive

05:22AM EDT - These cars are going to enable us to not pay attention but still travel safely

05:22AM EDT - Even if the computer fails, there has to be backups

05:23AM EDT - Solved by redundancy and diversity: solving a problem in multiple ways

05:24AM EDT - Movie time

05:24AM EDT - Deep Learning Perception

05:25AM EDT - Neural Networks can detect objects from any angle in any orientation

05:27AM EDT - 'We created the stack so we can learn how the industry works'

05:27AM EDT - Customers can use all or some of our platform

05:28AM EDT - The companies working on autonomous transportation services are worldwide

05:28AM EDT - Working out how to drive without a driver - having a backup computer that can complete the task if something fails

05:30AM EDT - Level 5 is several orders of magnitude harder than Level 2

05:30AM EDT - more computing, more sensors, redundant sensors, redundant computing, reacting to the environment even during failover

05:31AM EDT - Implementing 'Fail-Operate' for Level 5
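As a rough illustration of what "fail-operate" means in software terms, here is a toy Python sketch: two independently implemented decision channels (diversity), a cross-check, and a failover path so the system keeps operating through a fault. The interfaces, thresholds, and fault model are entirely hypothetical.

```python
# Toy "fail-operate" sketch: redundancy (two channels) plus diversity (two
# different implementations), with a comparator that prefers the safe action.

def primary_channel(obstacle_distance_m):
    if obstacle_distance_m < 0:
        raise RuntimeError("sensor fault")   # simulated hardware failure
    return "BRAKE" if obstacle_distance_m < 10.0 else "CRUISE"

def backup_channel(obstacle_distance_m):
    # Independently written and deliberately more conservative (diversity).
    return "BRAKE" if abs(obstacle_distance_m) < 12.0 else "CRUISE"

def fail_operate(distance):
    try:
        decision = primary_channel(distance)
    except RuntimeError:
        # Primary failed: keep operating on the backup instead of shutting down.
        return backup_channel(distance), "failover"
    # Cross-check the channels; on disagreement, take the safer action.
    if decision != backup_channel(distance):
        return "BRAKE", "disagree"
    return decision, "ok"

print(fail_operate(50.0))   # -> ('CRUISE', 'ok')
print(fail_operate(11.0))   # -> ('BRAKE', 'disagree')
print(fail_operate(-1.0))   # -> ('BRAKE', 'failover')
```

A real Level 5 system does this across redundant sensors, redundant computers, and redundant algorithms, but the principle is the same: a single failure must degrade the system, not stop it.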

05:31AM EDT - Current state-of-the-art driverless vehicles have a trunk full of hardware

05:31AM EDT - Several thousand watts of compute needed in these devices

05:32AM EDT - Now Announcing Drive PX Pegasus - a robo-taxi for level 5 autonomy

05:32AM EDT - 320 TOPS with Tensor Core

05:33AM EDT - ASIL-D, 500W

05:33AM EDT - Late Q1 for early access partners

05:33AM EDT - Using two SoCs and two GPUs

05:34AM EDT - Also announcing the NVIDIA Drive IX SDK

05:35AM EDT - Sensing inside and outside the vehicle using pretrained networks

05:36AM EDT - Detecting the driver outside the vehicle, automatic personalization, inattentive driver alert, cyclist alert, distracted driver alert, driver/passenger recognition

05:36AM EDT - Detects who you are and applies your custom settings for seats/radio/lighting/throttle response before you enter the vehicle

05:36AM EDT - Early Access Q4, runs on the Drive platform

05:38AM EDT - The full scope from training to driving to AV stack to 3D simulation to real-time testing via an open platform for partners

05:40AM EDT - Using 3D simulations to train neural networks: simulate millions of miles driven in VR with new weather effects and difficulties. Confirm with real-time testing

05:40AM EDT - Resimulate 300k miles in 5 hours - redrive every paved road in the US in 2 days
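Those two figures are consistent with each other, which is easy to verify:

```python
# The two resimulation claims imply the same throughput.
miles, hours = 300_000, 5
rate = miles / hours                 # simulated miles per hour
two_days_miles = rate * 48           # "every paved road in the US in 2 days"
print(f"{rate:,.0f} miles/hour -> {two_days_miles:,.0f} miles in 2 days")
```

2.88 million miles in 48 hours is roughly in line with published estimates of US paved road mileage (around 2.7 million miles).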

05:42AM EDT - Retrain and resim with new vectors

05:42AM EDT - resimulate in hyper-real-time

05:43AM EDT - Training a network on so many routes all at once

05:44AM EDT - 'An entire platform end-to-end'

05:44AM EDT - 145 startups on NVIDIA Drive

05:45AM EDT - Final subject for the presentation

05:45AM EDT - Autonomous vehicles are about collision avoidance. Autonomous machines are all about interaction

05:46AM EDT - Mentioning Xavier

05:46AM EDT - They are only just taping out Xavier, waiting on silicon to come back

05:46AM EDT - Project Isaac to train AI Robots

05:47AM EDT - Simulate the robot in VR and train it, then put the network into a real world robot

05:48AM EDT - AI robot learning to play hockey without writing any code

05:48AM EDT - Train the AI in super-real-time and scale out

05:49AM EDT - Test multiple robots at once, then take the smartest robot in a fixed time and start the training from the new point
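The "take the smartest robot and restart from there" loop is essentially population-based search with elitism. A minimal toy sketch - the fitness function and all parameters are invented for illustration, and this is not NVIDIA's Isaac:

```python
import random

# Toy population-based training: run several "robots" (candidate parameter
# vectors) for a fixed time, keep the fittest, restart the population from it.

random.seed(0)
TARGET = [0.7, -0.3, 1.2]   # hypothetical optimal behavior parameters

def fitness(params):
    # Higher is better: negative squared distance to the target behavior.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def perturb(params, scale=0.3):
    # Each robot explores a slightly different variation of the incumbent.
    return [p + random.gauss(0.0, scale) for p in params]

best = [0.0, 0.0, 0.0]
for generation in range(30):
    population = [perturb(best) for _ in range(16)]   # 16 robots in parallel
    population.append(best)                 # elitism: never lose the incumbent
    best = max(population, key=fitness)     # "take the smartest robot"

print(f"best fitness after 30 generations: {fitness(best):.4f}")
```

Because the incumbent is always kept, fitness never regresses between generations - the simulated fleet only ever restarts from its best performer.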

05:49AM EDT - > This isn't a new concept. It's just being applied to robots in a simulated world to then be applied in the real world

05:50AM EDT - 'Autonomous machines are the next exciting element in our industry'

05:50AM EDT - Project Isaac to be launched in the near future

05:51AM EDT - Time for a roundup. 2 hours is a long press event

05:51AM EDT - Headline is the announcement of Drive PX Pegasus. Our news on this will go out when the embargo lifts in about 10 minutes - we got a lot more info in our pre-briefing

05:52AM EDT - Also announced was the Drive IX SDK

05:52AM EDT - Strange that none of the major automotive partners were put on stage

05:53AM EDT - That's a wrap! Time for a press Q&A and some lunch.


6 Comments


  • Manch - Tuesday, October 10, 2017 - link

    Wish work would have let me attend this :|
  • thesenate - Tuesday, October 10, 2017 - link

    Is the GPU in Drive Pegasus based on GV100? Or is it based on some unannounced variant of Volta, like GV102?
  • Ryan Smith - Tuesday, October 10, 2017 - link

    Neither. Post-Volta.
  • edzieba - Tuesday, October 10, 2017 - link

    "> This isn't a new concept. It's just being applied to robots in a simulated world to then be applied in the real world"
    Using simulation evolutionary training is not a new concept either. Doing it for a more basic robot (the classic two-wheels-and-some-ultrasonic-sensors swarm demobot) was a first-year task a decade ago at uni.

    Like most of the 'Deep Learning' field, this is all stuff that's very well trod from a research standpoint, but instead of a revolution in understanding making it commercially viable, it's having monumental computational power thrown at it. Like computing in general, AI of today isn't 'smarter' than research AI of a decade ago, it's just very stupid very fast.
  • Yojimbo - Tuesday, October 10, 2017 - link

    Hmm, I'm not so sure that's fair to say. One could make a similar argument that the progress in electronics throughout much of the 20th century was stuff that was all well-trod from a research standpoint, as it relied on 19th century physics; in the 20th century they just had the technology to throw at the theory. By the time they had to worry about quantum effects, well, the revolutionary theoretical work there had been completed decades beforehand, as it was done in the early 20th century. Or one could say that the Manhattan Project was not revolutionary because it was simply the application of the theories of nuclear reactions that had already been worked out.

    The basic algorithms of deep learning aren't new, that's true. But the engineering of building these networks efficiently and of effectively interconnecting various networks to create a powerful system of networks seems to be happening now. For instance, AlphaGo played Lee Sedol in a Go match in 2016 and played Ke Jie in another match in 2017. Go experts agree that the 2017 version was a lot stronger than the 2016 version. But the 2016 version was running on much more powerful hardware than the 2017 version. Yes, there had been more time for training, but someone from DeepMind, I believe it was David Silver, said that the big difference that let them decrease the hardware necessary to run AlphaGo was the tuning of their networks.

    Coming out with a safe, working self-driving vehicle seems to take a lot more than having enough data and the processing power to crunch it by using only things that were known 30 years ago. It seems to take quite a bit of engineering and additional know-how, just like it did to actually build a working nuclear bomb. There's a lot of problem-solving left to be done. After all, the Nazis had all the same theories as well as experts in them, yet they didn't have a working nuclear bomb by the end of the war.
