The best way to learn is by doing, and to help you get started, we have assembled a series of tutorials and instructional materials featuring the latest developer innovations.
Register for upcoming automotive developer webinars to learn more about the NVIDIA DRIVE platform.
In each hour-long session, NVIDIA experts will dive into the details of various aspects of the end-to-end AV computational pipeline and will be available for live Q&A.
Real-Time Sensor Recording and Replay Tools with the NVIDIA DriveWorks SDK
Wednesday, November 4, 2020, 9:00 AM PST | 6:00 PM PST
Every autonomous vehicle software development pipeline requires the ability to record and replay high-quality sensor data in support of use cases such as DNN training, algorithm development, and debugging. Learn how the NVIDIA DriveWorks SDK provides several tools and APIs that let developers quickly get started with recording and replaying data, freeing up valuable development time.
Catch up on past DRIVE webinar recordings at your own convenience!
|Autonomous Driving at Scale: Architect and Deploy Object Detection Inference Pipelines||DRIVE Infrastructure||
Experts from NVIDIA and system integrator Tata Consultancy Services (TCS) cover the data annotation pipeline for autonomous vehicle training, addressing the sensitivity to variations in data distribution, data type, annotation requirements, quality parameters, and data volume, as well as the automation effectiveness critical to achieving optimal annotation accuracy and speed.
|Developing Intelligent In-Cabin Experience Using DRIVE IX||DRIVE IX||
The webinar covers the architecture of the NVIDIA DRIVE IX open platform and how to build intelligent occupant-centric applications, as well as future capabilities coming down the pipeline. With the modular design of DRIVE IX, developers can use a wide array of functionalities and try out different technologies for the same feature using plugins. We walk through sample code that demonstrates how to build intelligent applications using APIs from the DRIVE IX platform.
|NVIDIA DRIVE Infrastructure – The Complete Datacenter Infrastructure to Build Autonomous Vehicles||
This webinar introduces NVIDIA’s infrastructure for building and maintaining autonomous vehicles. It includes techniques for managing the lifecycle of deep learning models, from definition, training, and deployment to reloading and lifelong learning. It also covers the powerful new NVIDIA DRIVE™ Constellation AV simulator, which is enabling the industry to safely drive billions of qualified miles in virtual reality. This two-server simulation platform makes it possible to test an autonomous vehicle in a near-infinite variety of conditions and scenarios before it even reaches the road. The session closes with the large-scale deployment of this validation infrastructure and the ecosystem around it.
|Planning and Control Architecture and Implementation for Autonomous Vehicles||DRIVE Planning||
An autonomous vehicle's planning and control stack is responsible for generating a safe and comfortable plan to drive the car. In this webinar, we review the general architecture of the NVIDIA DRIVE™ Planning and Control stack and show examples of how the stack drives the car in different scenarios. We'll discuss an example of a module API and explore methods to test and benchmark performance of planning and control functions.
|Perception and Mapping Architecture for Autonomous Vehicles||
Perception, mapping and localization are critical for robust autonomous vehicle planning and control. In this webinar, we’ll walk through the NVIDIA DRIVE™ Perception and NVIDIA DRIVE™ Mapping general architectures. We also show how the stack provides a perception signal and how that perception signal is used to build a map and localize the ego-car.
|Building AI-Enabled AV Applications with NVIDIA DRIVE Software||DRIVE Software||
The open and scalable NVIDIA DRIVE™ Software includes DriveWorks and DRIVE AV SDKs, which provide the building blocks for developers to implement highly optimized autonomous vehicle software applications that leverage the computing power of the NVIDIA DRIVE AGX platform. In this webinar, we walk through DriveWorks and DRIVE AV design principles, the algorithms included in each release, our approach to leveraging the power of the NVIDIA Xavier SoC, as well as how to develop optimized applications.
|Introducing NVIDIA DRIVE OS, the Functional Safety Operating System for Autonomous Vehicles||DRIVE OS||
NVIDIA DRIVE™ OS is the foundation of the NVIDIA DRIVE™ Software stack. In this presentation, we introduce DRIVE OS for Safety, the first functional safety operating system designed specifically for accelerated computing and artificial intelligence. Certifying an open platform like DRIVE OS is a monumental undertaking, involving ground-breaking processes, tools, methodologies and technologies, fine-tuned to provide a rich set of features, cybersecurity and a performant real-time architecture.
In this webinar, you'll learn how we approached this vital task and how to implement functional safety in the autonomous vehicle development process.
|NVIDIA DRIVE AGX Solutions for Scalable Autonomous Vehicle Development||DRIVE AGX||
NVIDIA DRIVE™ AGX is an open, scalable architecture for autonomous driving capabilities, from NCAP through robotaxi. NVIDIA has developed a unique suite of SoC, GPU, and Smart Network computational and acceleration options for flexible autonomous vehicle development. This session provides details on our latest SoC, GPU, and Smart Network products and how they can be used in a vehicle computer architecture. We also go over DRIVE AGX Hyperion sensor solutions for both passenger cars and commercial trucks.
|Integrating DNN Inference into Autonomous Vehicle Applications with NVIDIA DriveWorks SDK||DriveWorks||
In this webinar, we’ll cover the steps to perform inference on a pretrained network with DriveWorks. We’ll first review DriveWorks basics before exploring the DriveWorks DNN APIs and tools to convert, optimize and run inference. Finally we walk through sample code that demonstrates how to integrate your DNN into your software pipeline.
|Part 3: Using CUDA Kernel Concurrency and GPU Application Profiling for Optimizing Inference on DRIVE AGX||DRIVE AGX||
Concurrent execution of multiple GPU inference tasks can deliver better performance than running them serially. As a real-world use case, we implement a multi-network inference pipeline for object detection and lane segmentation. In building this application, we show how to achieve kernel concurrency using multiple CUDA streams and CUDA Graphs. We then introduce how to use NVIDIA Nsight Systems to profile the application, showing the performance gains from implementing concurrency.
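The intuition behind the concurrency gain described above can be sketched with a back-of-the-envelope timing model (plain Python, not the DriveWorks or CUDA API; the kernel durations are hypothetical):

```python
# Illustrative timing model: estimates the wall-clock benefit of running
# independent inference kernels concurrently on separate CUDA streams
# versus back-to-back on a single stream.

def serialized_time(kernel_ms):
    """Total time when kernels run one after another on a single stream."""
    return sum(kernel_ms)

def concurrent_time(kernel_ms, overlap=1.0):
    """Idealized time when kernels overlap on separate streams.

    overlap=1.0 models full overlap (bounded by the longest kernel);
    overlap=0.0 degenerates to serialized execution. Real overlap sits
    in between, limited by SM occupancy and memory bandwidth -- which is
    exactly what profiling with Nsight Systems helps you measure.
    """
    serial = sum(kernel_ms)
    ideal = max(kernel_ms)
    return serial - overlap * (serial - ideal)

# Hypothetical per-frame costs: object detection 8 ms, lane segmentation 5 ms.
kernels = [8.0, 5.0]
print(serialized_time(kernels))        # 13.0 ms on one stream
print(concurrent_time(kernels))        # 8.0 ms with full overlap
print(concurrent_time(kernels, 0.5))   # 10.5 ms with partial overlap
```

The model makes the key point of the webinar concrete: the payoff of multi-stream execution is bounded by the longest kernel and by how much overlap the hardware can actually sustain.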
|Part 2: Extending NVIDIA TensorRT with custom layers using CUDA||DRIVE AGX||
The second installment of this webinar series explains how to extend TensorRT with custom operations by running custom layers through the TensorRT plugin interface. For the fastest custom-layer implementation, the CUDA kernels should be built for the same GPU on which the optimized engine will run. We present optimized CUDA kernel implementations, then cover TensorRT plugins and show, with a sample application, how to wrap a CUDA kernel in a TensorRT plugin for DNN model optimization.
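A common first step in the plugin workflow described above is writing a host-side reference for the unsupported operation, then validating the CUDA kernel against it. A hypothetical sketch in pure Python, using hard swish as a stand-in for an op a given TensorRT version might lack natively:

```python
# Hypothetical example: a pure-Python reference for a custom activation
# ("hard swish") that a TensorRT plugin would implement as a CUDA kernel.
# A host-side reference like this is the usual way to validate the
# kernel's output before wiring the plugin into the engine.

def relu6(x):
    return min(max(x, 0.0), 6.0)

def hard_swish(x):
    # x * ReLU6(x + 3) / 6 -- the math the custom CUDA kernel would compute
    return x * relu6(x + 3.0) / 6.0

def hard_swish_tensor(values):
    """Apply the op elementwise, mimicking the plugin's per-buffer execution."""
    return [hard_swish(v) for v in values]
```

In the real plugin, this elementwise loop becomes a CUDA kernel launched from the plugin's enqueue step, and the Python reference is kept around as the ground truth for unit tests.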
|Part 1: Optimizing DNN inference using CUDA and TensorRT on DRIVE AGX||DRIVE AGX||
In this webinar, we introduce CUDA cores, threads, blocks, grids, and streams, along with the TensorRT workflow. We also cover CUDA memory management and TensorRT optimization, and how you can deploy optimized deep learning networks using TensorRT samples on NVIDIA DRIVE AGX.
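The thread/block/grid indexing arithmetic introduced in this session can be mirrored in plain Python (a sketch of the standard CUDA launch pattern, not actual device code):

```python
# Plain-Python sketch of CUDA 1D indexing: each thread derives a global
# index from its block and thread IDs, and the launch rounds the grid
# size up so every element is covered.

def grid_size(n, block_dim):
    """Number of blocks needed to cover n elements (ceiling division)."""
    return (n + block_dim - 1) // block_dim

def saxpy_host(a, x, y, block_dim=256):
    """Simulate a 1D kernel launch computing y[i] = a * x[i] + y[i]."""
    n = len(x)
    out = list(y)
    for block_idx in range(grid_size(n, block_dim)):
        for thread_idx in range(block_dim):
            i = block_idx * block_dim + thread_idx  # global thread index
            if i < n:                               # bounds check, as in the kernel
                out[i] = a * x[i] + out[i]
    return out

print(grid_size(1000, 256))  # 4 blocks: the last block is partially idle
```

The `if i < n` guard is the detail beginners most often miss: because the grid size is rounded up, the last block usually has threads with no element to process.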
|Developing a Camera Pipeline Using NVIDIA DriveWorks||DriveWorks||
This webinar covers the steps to develop camera image processing software on the DriveWorks SDK. Using this platform, developers can implement a range of capabilities seamlessly and with high performance. The webinar includes DriveWorks image basics, low-level computer vision modules, and feature tracking and DNN samples.
|Integrating Custom Sensors Using NVIDIA DriveWorks||DriveWorks||
This webinar covers how to implement and use sensor plugins for different sensor types such as radar, lidar, and camera. These plugins make it possible for developers to bring new sensors into the DriveWorks Sensor Abstraction Layer (SAL) by implementing the transport and protocol layers needed to communicate with the sensor.
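The transport/protocol split described above can be sketched abstractly. This is a hypothetical illustration in Python, not the actual DriveWorks plugin API; the class and method names are invented:

```python
# Hypothetical sketch of the sensor-plugin pattern: a transport layer
# fetches raw bytes from the device, and a protocol layer decodes them
# into structured packets the rest of the stack can consume.
from abc import ABC, abstractmethod
import struct

class SensorPlugin(ABC):
    @abstractmethod
    def read_raw(self):
        """Transport layer: return the next raw buffer from the device."""

    @abstractmethod
    def decode(self, raw):
        """Protocol layer: parse a raw buffer into a structured packet."""

class DummyLidarPlugin(SensorPlugin):
    """Illustrative lidar plugin: each buffer holds (timestamp_us, x, y, z)."""
    def __init__(self, buffers):
        self._buffers = list(buffers)

    def read_raw(self):
        return self._buffers.pop(0)

    def decode(self, raw):
        ts, x, y, z = struct.unpack("<Qfff", raw)
        return {"timestamp_us": ts, "point": (x, y, z)}

# Feed one synthetic buffer through both layers:
raw = struct.pack("<Qfff", 1700000000, 1.0, 2.0, 0.5)
plugin = DummyLidarPlugin([raw])
packet = plugin.decode(plugin.read_raw())
```

Keeping the two layers separate is what lets one framework-side abstraction serve radar, lidar, and camera alike: only the decode logic changes per sensor.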
GPU Technology Conference (GTC) highlights the latest breakthroughs in autonomous vehicles, AI, HPC, accelerated data science, healthcare, graphics, and more. Explore the extensive catalog of recorded presentations on the future of self-driving technology through GTC On Demand.
Catch up on the top-rated GTC 2020 automotive sessions:
- Inside NVIDIA's AI Infrastructure for Self-Driving Cars
- NVIDIA DRIVE Labs: An Inside Look at Autonomous Vehicle Software
- Optimizing TensorRT Conversion for Real-Time Inference on Autonomous Vehicles
- Panoptic Segmentation DNN for Autonomous Vehicles
- PredictionNet: Predicting the Future in Multi-Agent Environments for Autonomous Vehicle Applications
- Sensor Processing with the NVIDIA DriveWorks SDK: Abstraction, Algorithms, and Acceleration
Peek under the hood of NVIDIA DRIVE Software with our latest video series.
Deep Learning Institute (DLI)
In this workshop, you will learn how to design, train, and deploy deep neural networks for autonomous vehicles using the NVIDIA DRIVE™ AGX Development platform. Learn how to:
- Integrate sensor input using the DriveWorks software stack
- Train a semantic segmentation neural network
- Optimize, validate, and deploy a trained neural network using TensorRT
Upon completion, participants will be able to create and optimize perception components for autonomous vehicles using NVIDIA DRIVE™.
- Prerequisites: Experience with CNNs
- Frameworks: TensorFlow, DIGITS, TensorRT
- Languages: English, Chinese, Japanese
In this six-month Nanodegree program, you will build the skills and learn the techniques used by self-driving car teams at the most advanced technology companies in the world. Learn how to:
- Apply computer vision and deep learning to automotive problems
- Use sensor fusion to perceive the environment
- Program Udacity’s real self-driving car
- Prerequisites: Experience with Python and C++
- Languages: English