News

NVIDIA Isaac ROS Delivers AI Perception to ROS Developers

Image shows Carter V2 Mobile Robot with semantic Lidar in warehouse scene.

Perceiving and understanding the surrounding world is a critical challenge for autonomous robots.

In conjunction with ROS World 2021, NVIDIA announced its latest efforts to deliver performant perception technologies to the ROS developer community. These initiatives will accelerate product development, improve product performance, and ultimately simplify the task of incorporating cutting-edge computer vision and AI/ML functionality into ROS-based robotic applications.

Announcement Highlights:

  • Highest-performing real-time stereo visual odometry solution available as a ROS package 
  • All NVIDIA inference DNNs on NGC available as ROS packages, with examples for image segmentation and pose estimation
  • New synthetic data generation (SDG) workflow in NVIDIA Isaac Sim to create production-quality datasets at scale for Vision AI training
  • NVIDIA Isaac Sim on Omniverse with out-of-the-box support for ROS goes GA with its most developer-friendly release to date

NVIDIA Isaac ROS GEMs—Optimized Performance

Image consists of blocks representing software components in Isaac ROS stack
Figure 1. Software Block Diagram of NVIDIA Isaac ROS GEMs

Isaac ROS GEMs are packages for image processing and computer vision, including DNN-based algorithms, that are highly optimized for NVIDIA GPUs and Jetson.

Highlighted GEM Stereo Visual Odometry—Best-in-Class Accuracy and Optimized Performance

As autonomous machines move around their environments, they must keep track of where they are. Visual odometry solves this problem by estimating where a camera is relative to its starting position. The Isaac ROS GEM for stereo visual odometry provides this powerful functionality to ROS developers.

This GEM offers the best accuracy for a real-time stereo camera visual odometry solution. Publicly available results based on the widely used KITTI database can be referenced here. In addition to being highly accurate, this GPU-accelerated package runs extremely fast. In fact, it is now possible to run SLAM on HD resolution (1280×720) in real-time (>60fps) on an NVIDIA Jetson AGX Xavier. 
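The core bookkeeping behind any visual odometry system is chaining per-frame relative motions into a global pose estimate. The sketch below is purely conceptual and is not the Isaac ROS API: it composes 2D relative motions (forward distance, lateral distance, heading change) into an accumulated pose, the same accumulation the GEM performs in full 3D from stereo imagery.

```python
import math

def compose(pose, delta):
    """Compose a global 2D pose (x, y, theta) with a relative motion
    (dx, dy, dtheta) expressed in the robot's current frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Four 1 m forward moves, each followed by a 90-degree left turn,
# trace out a square and bring the robot back to its start.
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = compose(pose, (1.0, 0.0, math.pi / 2))

print(pose)  # x and y return to ~0 (floating-point noise), theta ~ 2*pi
```

Because each step composes onto the previous estimate, small per-frame errors accumulate over time, which is why the accuracy of the underlying odometry matters so much in practice.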

Highlighted GEM DNN Inference—All NGC DNN Inference Models Available to ROS Developers

You can use any of NVIDIA’s numerous inference models available on NGC, or even provide your own DNN, with the DNN Inference GEM, a set of ROS2 packages. Pretrained models can be further tuned, and your own models optimized, with the NVIDIA TAO Toolkit.

After optimization, these packages deploy the models with TensorRT or Triton, NVIDIA’s inference server. Optimal inference performance is achieved with nodes leveraging TensorRT, NVIDIA’s high-performance inference SDK. If TensorRT does not support the desired DNN model, NVIDIA Triton can be used to deploy it instead. 

The GEM includes native support for U-Net and DOPE. The U-Net package, based on TensorRT, can be used to generate semantic segmentation masks from images, while the DOPE package can be used for 3D pose estimation of detected objects. 

This tool is the fastest way to incorporate performant AI inference into a ROS application.
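To make the semantic segmentation output concrete, the illustrative sketch below shows the standard final step of a U-Net-style model: collapsing per-pixel class scores into a mask of winning class indices via an argmax. This is a conceptual example, not the Isaac ROS package's actual interface, and the two-class layout (background vs. person) is hypothetical.

```python
def scores_to_mask(scores):
    """scores: H x W x C nested lists of per-class confidences.
    Returns an H x W mask of winning class indices (argmax over C)."""
    return [[max(range(len(px)), key=px.__getitem__) for px in row]
            for row in scores]

# A hypothetical 2x2 image with two classes: 0 = background, 1 = person.
scores = [
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.6, 0.4], [0.3, 0.7]],
]
mask = scores_to_mask(scores)
print(mask)  # [[0, 1], [0, 1]]
```

In the real pipeline, the heavy lifting (the DNN forward pass) runs GPU-accelerated via TensorRT; only the resulting mask reaches downstream ROS nodes.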

 Image is a composite image from three different image-processing algorithms.
Figure 2. Composite Image from Three Isaac ROS GEMs—DNN (PeopleSemSegnet)/AprilTags/Disparity(Depth)

NVIDIA Isaac Sim GA Release

The GA release of Isaac Sim, which will be available in November 2021, will be the most developer-friendly release to date. Numerous improvements to the user interface, performance, and reusable building blocks will lead to better simulations, built much faster. In addition, an improved ROS bridge and more ROS samples will enhance the developer experience.

New For This Release (2021.2 release scheduled for November 2021)

  • Improved performance, reduced memory usage and startup times
  • Improved occupancy map generation, URDF importer
  • New environments: Large warehouse, office, hospital
  • New Python building blocks for interfacing with robots, objects, environments
  • Improved performance for ROS/ROS2 Bridge, depth point cloud, lidar point cloud
  • Sample updates
    • Multirobot navigation with ROS2
    • SDG with domain randomization in Jupyter
Video 1: Joint Control of Franka Using ROS MoveIT

New Synthetic Data Generation Workflow—Production Datasets from Isaac Sim

An autonomous robot requires large and diverse datasets to train the numerous AI models running its perception stack. Capturing all of this training data from real-world scenarios is cost-prohibitive and, for corner cases, potentially dangerous. The new synthetic data workflow provided with Isaac Sim is designed to build production-quality datasets that address the safety and quality concerns of autonomous robots.

The developer building the datasets controls the stochastic distribution of the objects in the scene, the scene itself, the lighting, and the synthetic sensors. The developer also has fine-grained control to ensure that important corner cases are included in the datasets. Finally, the workflow supports versioning and debugging information so that datasets can be exactly reproduced for auditing and safety purposes. 
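The key to exact reproducibility is that every stochastic choice flows from a recorded seed. The sketch below illustrates that idea in plain Python; it is not Isaac Sim's API, and the parameter names and ranges are hypothetical stand-ins for the scene, lighting, and sensor distributions a developer would control.

```python
import random

def sample_scene(seed):
    """Sample one randomized scene description. Logging only the seed
    is enough to regenerate the identical scene later for auditing.
    Parameter names and ranges here are hypothetical."""
    rng = random.Random(seed)
    return {
        "light_intensity": rng.uniform(200.0, 1000.0),
        "num_boxes": rng.randint(5, 30),
        "camera_height_m": rng.uniform(0.5, 2.0),
    }

scene_a = sample_scene(seed=42)
scene_b = sample_scene(seed=42)  # regenerated from the logged seed
assert scene_a == scene_b       # exact reproduction of the sampled scene
```

Pinning the per-scene seed (together with asset and workflow versions) is what lets a dataset be regenerated bit-for-bit when a safety audit asks how a particular training image was produced.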

Image consists of various images generated by synthetic sensors in simulation
Figure 3. Examples of Synthetic Data from the Sensors library in Isaac Sim

Getting Started

Learn how to get started with Isaac ROS.

Be sure to stop by NVIDIA’s virtual booth at ROS World 2021, and watch our technical presentations on Isaac.

  • On Oct. 21 at 4pm CT, join Hammad Mazhar, NVIDIA Lead Simulation Engineer, to learn how Isaac Sim can be used to simulate different ROS/ROS2-driven workflows.
  • Join the Simulation Tools on ROS panel on Oct. 21 at 5:20pm CT where Liila Torabi, NVIDIA Isaac Senior Product Manager, and developers of some of the world’s largest robotics simulation products will discuss the past, present, and future of simulation.