
GTC Silicon Valley 2019, ID: S9281: Deploying AI on Jetson Xavier/DRIVE Xavier with TensorRT and MATLAB

Avinash Nehemiah (MathWorks), Jaya Shankar (MathWorks)
Learn how GPU Coder produces high-performance CUDA code automatically from a high-level algorithm description in MATLAB. Write your deep learning application with the expressive power of MATLAB, which lets you express not only the use of your trained deep learning model in inference mode, but also the data augmentation and post-processing of results needed to create a complete deployment-ready application. With MATLAB running on your host machine, communicate with and control peripheral devices on your Jetson Xavier and DRIVE Xavier platforms to bring in live sensor data for visualization and analysis. GPU Coder can then generate optimized inference code for the whole application. The deep learning inference model is compiled down to TensorRT's inference engine, while the rest of the application logic is parallelized through the creation of CUDA kernels and integrated with other CUDA-optimized libraries such as cuBLAS and cuFFT. GPU Coder provides a clean, elegant path from algorithm to application deployment, unleashing the performance of CUDA, TensorRT, and the Xavier device architecture.
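
As a rough sketch of the workflow the abstract describes (not code from the session itself), the MATLAB snippet below connects to a Jetson board, configures GPU Coder to compile the network through TensorRT, and generates a standalone executable. The entry-point function name (myPredict), the saved network file (myNet.mat), and the board address and credentials are hypothetical placeholders; the API calls (jetson, coder.gpuConfig, coder.hardware, coder.DeepLearningConfig, codegen) are from GPU Coder and the MATLAB Coder Support Package for NVIDIA Jetson and DRIVE Platforms.

    % Entry-point function, e.g. in myPredict.m (hypothetical name):
    %   function out = myPredict(in)
    %       persistent net;
    %       if isempty(net)
    %           net = coder.loadDeepLearningNetwork('myNet.mat'); % hypothetical trained network
    %       end
    %       out = predict(net, in);
    %   end

    hwobj = jetson('jetson-board-addr', 'ubuntu', 'ubuntu');       % placeholder address/credentials

    cfg = coder.gpuConfig('exe');                                  % build a standalone executable
    cfg.Hardware = coder.hardware('NVIDIA Jetson');                % cross-compile for the Jetson target
    cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt'); % run the network through TensorRT
    cfg.GenerateExampleMain = 'GenerateCodeAndCompile';            % emit an example main for the executable

    codegen -config cfg myPredict -args {ones(224,224,3,'single')}

In this flow, the predict call inside the entry point is lowered to a TensorRT engine, while any surrounding MATLAB code (pre- and post-processing) is compiled into CUDA kernels, matching the split the abstract describes.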

View the slides (pdf)