Technical Walkthrough

Training Your JetBot in NVIDIA Isaac Sim


How do you teach your JetBot new tricks? In this post, we highlight the simulation and training capabilities of NVIDIA Isaac Sim by walking you through how to train the JetBot in Isaac Sim with reinforcement learning (RL) and then test the trained RL model on the real JetBot, powered by an NVIDIA Jetson Nano.

Photo shows real JetBot next to photo-realistic simulation of JetBot in Isaac Sim
Figure 1. Which one is the real JetBot?

Goal

The goal is to train a deep neural network agent in Isaac Sim and transfer it to the real JetBot to follow a road. Training this network on the real JetBot would require frequent human attention. Isaac Sim can simulate the mechanics of the JetBot and its camera sensor, and it automates placing and resetting the JetBot during training. The simulation also gives you access to ground truth data and lets you randomize the environment the agent learns in, which helps make the network robust enough to drive the real JetBot.

Training JetBot in Isaac Sim

Photo shows the simulation JetBot on top of a road in Isaac Sim.
Figure 2. Training JetBot to follow the road in Isaac Sim.

First, download Isaac Sim. Running Isaac Sim requires the following resources:

  • Ubuntu 18.04
  • NVIDIA GPU (RTX 2070 or higher)
  • NVIDIA GPU Driver (minimum version 450.57)

Next, apply this Isaac Sim patch.

For more information about how to train the RL JetBot sample in Isaac Sim, see Reinforcement Training Samples.

Isaac Sim can simulate the JetBot driving around and randomize the environment, lighting, backgrounds, and object poses to increase the robustness of the agent. Figure 3 shows what this looks like during training:

GIF shows Isaac Sim’s domain randomization tools to create more lighting conditions, shadows, distractors and world scenarios to train your robot.
Figure 3. Domain-randomized lighting, distractors, and road curves to create variety and increase network robustness.

After training, the JetBot can autonomously drive around the road in Isaac Sim.

GIF shows JetBot following the road.
Figure 4. RL training results in Isaac Sim.

Here’s how you can test this trained RL model on the real JetBot.

GIF shows real JetBot following the track.
Figure 5. A trained RL network in Isaac Sim is transferred and run on the real JetBot. The brain of the real JetBot is an NVIDIA Jetson Nano.

Figure 6 shows what the real JetBot is seeing and thinking.

GIF from JetBot camera. The images are pushed through the trained RL network which produces driving commands, shown in the slider
Figure 6. JetBot camera point of view and steering intentions.

Running the trained RL model on the real JetBot

To build a JetBot, you need the following hardware components:

For more information about supported components, see Networking.

Assemble the JetBot according to the instructions. On the Waveshare JetBot, removing the fourth (front) wheel may help it get stuck less often. Also note that the 2GB Jetson Nano may not come with a fan connector.

Flash your JetBot with the following instructions:

Put the microSD card in the Jetson Nano board. Plug a keyboard, mouse, and HDMI cable into the board and power it with the 12.6V adapter. Boot up and follow the onscreen instructions to set up the JetBot user.

Update the package list:

$ sudo apt-get update

If you are using the 2GB Jetson Nano, you also need to run the following command:

$ sudo apt-get dist-upgrade

After setting up the physical JetBot, clone the following JetBot fork:

$ git clone https://github.com/hailoclu/jetbot.git

Launch Docker with all the steps from the NVIDIA-AI-IOT/jetbot GitHub repo, then run the following commands:

./scripts/enable_swap.sh
./docker/camera/enable.sh

These must be run on the JetBot directly or through SSH, not from the Jupyter terminal window. If you see a docker: invalid reference format error, set your environment variables again by running source configure.sh.

Running the following two commands from the Jupyter terminal window also allows you to connect to the JetBot using SSH:

apt install openssh-server
ssh jetbot@0.0.0.0

After Docker is launched with ./enable.sh $HOME, you can connect to the JetBot from your computer through a Jupyter notebook by navigating to the JetBot IP address on your browser, for example, http://192.168.0.185:8888. If the setup succeeded without error, the IP address of the JetBot should be displayed on the LED on the back of the robot.

Unplug the keyboard, mouse, and HDMI to set your JetBot free.

Install Stable-Baselines3 by clicking the plus (+) button in the Jupyter notebook to launch a terminal window, then run the following two commands:

apt install python3-scipy python3-pandas python3-matplotlib
python3 -m pip install stable_baselines3==0.8.0

Screenshot of Jupyter terminal window with JetBot status.
Figure 8. Using Jupyter notebooks to communicate with the real JetBot.

Upload your trained RL model (the best_model.zip file produced by Isaac Sim) using the up-arrow button.

You can also download the trained model. Launch the jetbot/notebooks/isaacsim_RL/isaacsim_deploying.ipynb notebook. Select each Jupyter cell and press Ctrl+Enter to execute it. The second cell, which calls PPO.load(MODEL_PATH), might take a few minutes. [*] means the kernel is busy executing; when it finishes, it changes to a number.

Picture showing code that loads the trained RL on the real JetBot with Jupyter notebook.
Figure 9. Loading the trained RL model on the real JetBot with a Jupyter notebook. 
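
If you want a sense of what that loading cell does, here is a minimal sketch, assuming stable_baselines3 0.8.0 and that best_model.zip sits in the notebook's working directory; the notebook's actual code may differ:

# Minimal sketch of the model-loading cell (illustrative, not the notebook's exact code)
from stable_baselines3 import PPO

MODEL_PATH = "best_model.zip"  # model trained and exported from Isaac Sim
model = PPO.load(MODEL_PATH)   # loading can take a few minutes on Jetson Nano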

Running the camera code should turn on the JetBot camera.

Picture showing code that turns on the real camera with Jupyter notebook.
Figure 10. Turning on the real JetBot camera.
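
The cell itself is shown only as a figure, but with the jetbot Python library, turning on the camera typically looks something like this sketch (the 224×224 capture size is an illustrative assumption):

# Sketch of the camera cell, assuming the jetbot library is available in the container
from jetbot import Camera

camera = Camera.instance(width=224, height=224)  # starts streaming frames into camera.value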

Executing this block of code lets the trained network run inference on the camera feed and issue driving commands based on what it sees.

Picture showing code that runs inference on the JetBot camera and setting the drive commands.
Figure 11. Running inference on the real JetBot.
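
The cell in Figure 11 is only shown as an image. The following sketch captures the idea, assuming the camera and model objects from the earlier cells, the jetbot Robot class, and a two-element action of left/right motor commands; the preprocessing helper and motor scaling here are illustrative, not the notebook's exact code:

import numpy as np
from jetbot import Robot

robot = Robot()

def preprocess(frame):
    # Illustrative: convert the RGB frame to grayscale so it matches the
    # observation format the network was trained on (see the sim-to-real
    # section later in this post).
    gray = np.dot(frame[..., :3], [0.299, 0.587, 0.114])
    return gray.astype(np.uint8)[..., None]

try:
    while True:
        obs = preprocess(camera.value)                         # latest camera frame
        action, _ = model.predict(obs, deterministic=True)     # network inference
        robot.set_motors(float(action[0]), float(action[1]))   # left, right commands
except KeyboardInterrupt:
    robot.stop()                                               # always stop the motors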

To interrupt the while loop, choose Stop. To stop the robot, run robot.stop().

Sim-to-real work

We specifically tailored the training environment to create an agent that could successfully transfer what it learned in simulation to the real JetBot. When we initially created the simulated camera, we used default values for the field of view (FOV) and simply angled it down at the road. This initial setup did not resemble the real camera image (Figure 12). We adjusted the FOV and orientation of the simulated camera (Figure 13) and added uniform random noise to its output during training. This was done to make the simulated camera view match the real camera view as closely as possible.

The real JetBot camera has a fisheye lens while the initial simulation camera doesn’t.
Figure 12. Initial camera parameter mismatch. Left: Real; Right: Isaac Sim.
Side by side pictures showing real camera images matching the simulation camera.
Figure 13. Matching the real and simulated camera parameters, and changing the dashed line color from yellow to white. Left: Real; Right: Isaac Sim.
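
As a rough sketch of the noise injection mentioned above (the noise range here is an assumption, not the value used in the sample):

import numpy as np

def add_camera_noise(image, max_noise=0.05, rng=None):
    # Perturb each pixel of the rendered camera image with uniform random
    # noise so the policy does not overfit to perfectly clean renders.
    rng = rng or np.random.default_rng()
    noisy = image.astype(np.float32) / 255.0
    noisy += rng.uniform(-max_noise, max_noise, size=image.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255.0).astype(np.uint8)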

We originally trained using the full RGB output from the simulated camera. However, we found that it took several hundred thousand network updates before the agent started driving consistently. To shorten this, convert all images from RGB to grayscale before feeding them to the network, for example with the wrapper sketched below. You should see the network start to display consistent turning behavior after about 100K updates. If the reward plateaus after a few hundred thousand updates, reduce the learning rate to help the network continue learning.
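
One way to do this is with a Gym observation wrapper around the training environment. The sketch below is illustrative rather than the sample's actual code:

import gym
import numpy as np

class GrayscaleObservation(gym.ObservationWrapper):
    # Collapse RGB camera observations to a single grayscale channel so the
    # policy has fewer irrelevant color features to learn.
    def __init__(self, env):
        super().__init__(env)
        h, w, _ = env.observation_space.shape
        self.observation_space = gym.spaces.Box(
            low=0, high=255, shape=(h, w, 1), dtype=np.uint8)

    def observation(self, obs):
        gray = np.dot(obs[..., :3], [0.299, 0.587, 0.114])
        return gray.astype(np.uint8)[..., None]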

We also wanted to create an agent that didn’t require a specific setup to function. Differences in lighting, colors, shadows, and so on mean that the domain your network encounters after being transferred to the real JetBot is quite large. You can’t simulate every possibility, so instead you teach the network to ignore variation in these things. You do this by periodically randomizing the track, lighting, and so on. You also spawn random meshes, known as distractors, to cast hard shadows on the track and help teach the network what to ignore. This process is known as domain randomization and it is a common technique in transfer learning.

Two images captured at the same road corner under different lighting conditions and distractors, to make the network more robust.
Figure 14. Two similar turns taken during the same training session. The changes in color, lighting, and texture during training force the agent to learn to ignore those properties in the real world.
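
To make the idea concrete, here is a purely conceptual sketch of the kinds of parameters that get re-sampled every so often during training; the names and ranges are illustrative and are not Isaac Sim's actual domain randomization API:

import random

def sample_randomization():
    # Illustrative only: each entry stands in for a property the simulator
    # re-randomizes periodically (lighting, track appearance, distractors).
    return {
        "light_intensity": random.uniform(500.0, 5000.0),
        "light_color": [random.uniform(0.5, 1.0) for _ in range(3)],
        "track_texture": random.choice(["asphalt", "concrete", "painted"]),
        "num_distractors": random.randint(0, 8),
        "distractor_pose": (random.uniform(-2.0, 2.0),
                            random.uniform(-2.0, 2.0),
                            random.uniform(0.0, 360.0)),
    }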

Conclusion

In this post, we demonstrated how you can use Isaac Sim to train an AI driver for a simulated JetBot and transfer the skill to a real one. You can evaluate how well the trained RL model performs on the real JetBot, then use Isaac Sim to address any shortcomings: randomize more heavily around failure cases; apply domain randomization to lighting glare, camera calibration, and so on; then retrain and redeploy. There are more things you could try to improve the results further.

GIF showing the trained RL network running on the 2GB Jetson Nano.
Figure 15. Running the trained RL model on the 2GB Jetson Nano.

Isaac Sim also provides RGB, depth, segmentation, and bounding box data. We encourage you to use this data in Isaac Sim to explore teaching your JetBot new tricks.

Image shows four views of JetBot in environment of pallets and guardrails, for navigation.
Figure 16. Four views of JetBot

Acknowledgements

Special thanks to the NVIDIA Isaac Sim team and Jetson team for contributing to this post, especially Hammad Mazhar, Renato Gasoto, Cameron Upright, Chitoku Yato, and John Welsh.