Developing an Autonomous Bot is a Walk in the Park

Each November for the last decade, Tsukuba City in Japan has run a 2,000-meter race unlike just about any other in the world. What’s unusual is not the terrain – which spans parks, city streets and even indoor shopping malls – or the speed, since the pace is a leisurely stroll. It’s the participants: each is an autonomous robot.
The event, called the Tsukuba Challenge, gives robots the difficult task of navigating a real-world urban environment on their own while identifying various targets – people, signs, signals and more – along the course.
This year, three teams built their robots with the NVIDIA Jetson embedded computing platform. Students and alumni from Utsunomiya University, Chiba University and Tsukuba University created bots that use deep learning to navigate the urban race course, identifying street signals, signs and other objects along the way, while avoiding obstacles such as pedestrians and cyclists.
From University Classes to Land Courses
Utsunomiya University used the Jetson TX1 for visual odometry, signal recognition and target detection. Visual odometry helps determine a robot’s position and orientation by analyzing camera images, without the use of external systems such as GPS. The university’s researchers feel these features are important for agricultural robots to work on uneven terrain and recognize crops.
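In broad strokes, visual odometry estimates the camera's relative motion between consecutive frames from image features, then chains those per-frame estimates into a global pose. The sketch below illustrates only the pose-chaining step in NumPy; the upstream feature matching and essential-matrix estimation that would produce the relative motions are assumed, and the function names are hypothetical, not the university's actual code.

```python
import numpy as np

def rotz(theta):
    """Rotation about the z-axis (yaw), in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def accumulate_pose(relative_motions):
    """Chain per-frame (R, t) estimates into a global pose.

    relative_motions: iterable of (R, t) pairs, where R is a 3x3
    rotation and t a 3-vector, both expressed in the previous
    camera frame. Returns the final global rotation and position.
    """
    R_w = np.eye(3)    # world-from-camera rotation
    p_w = np.zeros(3)  # camera position in the world frame
    for R_rel, t_rel in relative_motions:
        p_w = p_w + R_w @ t_rel  # move along the current heading
        R_w = R_w @ R_rel        # then update the heading
    return R_w, p_w

# Example: two 1-meter steps, each followed by a 90-degree left
# turn, trace out an "L" ending at (1, 1, 0).
moves = [(rotz(np.pi / 2), np.array([1.0, 0.0, 0.0]))] * 2
R, p = accumulate_pose(moves)
```

Real systems add drift correction (loop closure, sensor fusion) on top of this accumulation, since small per-frame errors compound over a 2,000-meter course.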
“Traffic signals were changing and people were moving. Thanks to our Jetson, we were able to provide amazing training performance to add more target data for deep learning,” said Koichi Ozaki, Ph.D., Dept. of Mechanical and Intelligent Engineering, Graduate School of Engineering, Utsunomiya University.
Chiba University used the Jetson TX1’s visual computing power to analyze lidar (light detection and ranging) data. Lidar point cloud data is converted to two-dimensional imagery and then run through a neural network trained by NVIDIA DIGITS and NVIDIA GPUs to detect key objects in the frame such as cars, pedestrians and cyclists. The Chiba team moved to the Jetson TX1 because of its small size, light weight, durability (anti-shock), low power consumption and affordability, which are all key for academic projects.
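The conversion the Chiba team describes – flattening a lidar point cloud into a 2D image a neural network can consume – is commonly done as a bird's-eye-view grid. The sketch below (a minimal NumPy illustration, with made-up ranges and cell size rather than the team's actual parameters) bins points into cells and records the maximum height per cell:

```python
import numpy as np

def bev_image(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Project a lidar point cloud (N x 3 array in meters) onto a
    2D bird's-eye-view grid, storing the max height per cell.
    Empty cells stay at 0."""
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    img = np.zeros((h, w), dtype=np.float32)
    # Keep only points that fall inside the grid.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]
    ix = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    np.maximum.at(img, (iy, ix), pts[:, 2])  # max z per cell
    return img

cloud = np.array([[10.0, 0.0, 1.7],    # pedestrian-height return
                  [10.1, 0.1, 0.4],    # same cell, lower return
                  [100.0, 0.0, 2.0]])  # outside the grid, dropped
img = bev_image(cloud)
```

The resulting height image can then be fed to an image-based detector such as one trained with DIGITS, exactly because it looks like an ordinary single-channel picture.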
“Instead of using Intel notebook computers, we were able to do everything on a Jetson TX1,” said Kazuya Okawa, Ph.D., Graduate School of Engineering, Mechanical Systems Science Course, Chiba University. “Jetson’s high performance ensures there are no issues with sensor data acquisition, self-localization estimations, obstacle avoidance and more.”
The alumni who formed the Tsukuba University team built a robot, called i-Cart Middle, with both a Jetson TX1 and dual Jetson TK1 modules, along with a Ricoh Theta camera. One Jetson TK1, paired with an AverMedia C353 HDMI capture board, captures fisheye camera images and converts them into rectified images. The other Jetson TK1 runs those rectified images through a convolutional neural network for human detection.
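Rectifying a fisheye image amounts to computing, for each pixel of the desired pinhole view, where to sample in the fisheye image. The sketch below assumes an equidistant fisheye model (image radius = focal length × angle from the optical axis); the focal lengths and principal point are illustrative values, not the team's calibration, and a call like OpenCV's `cv2.remap` could then apply the resulting lookup tables:

```python
import numpy as np

def fisheye_to_pinhole_map(out_size, f_pin, f_fish, c_fish):
    """Build per-pixel lookup tables mapping a rectified pinhole
    view (looking along the optical axis) back into an equidistant
    fisheye image."""
    h, w = out_size
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Normalized pinhole ray directions for each output pixel.
    x, y = (u - cx) / f_pin, (v - cy) / f_pin
    r = np.hypot(x, y)
    theta = np.arctan(r)  # angle from the optical axis
    # Equidistant model: fisheye radius = f_fish * theta.
    # As r -> 0, theta/r -> 1, so the scale tends to f_fish.
    scale = np.where(r > 0, f_fish * theta / np.maximum(r, 1e-12), f_fish)
    map_x = (c_fish[0] + x * scale).astype(np.float32)
    map_y = (c_fish[1] + y * scale).astype(np.float32)
    return map_x, map_y

# Hypothetical 5x5 output view with made-up intrinsics.
map_x, map_y = fisheye_to_pinhole_map((5, 5), f_pin=2.0,
                                      f_fish=2.0, c_fish=(100.0, 100.0))
```

Because arctan compresses large angles, off-axis output pixels sample closer to the fisheye center than a pinhole model would predict, which is exactly the distortion being undone.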
The Jetson TX1 then runs a Faster R-CNN (region-based convolutional neural network) to scan the images for traffic signal transitions – detecting them is a primary mission of the Tsukuba Challenge.
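Detecting signal transitions, as opposed to per-frame signal states, typically means debouncing the detector's noisy frame-by-frame output so a one-frame misclassification doesn't register as a light change. The following is a small illustrative sketch (the labels and the `stable` threshold are assumptions, not the team's actual pipeline):

```python
def signal_transitions(frames, stable=3):
    """Turn noisy per-frame signal labels (e.g. from a detector)
    into debounced transition events.

    frames: iterable of labels such as "red" / "green".
    stable: frames a label must persist before it is trusted.
    Returns a list of (frame_index, old_state, new_state) events.
    """
    events, state, candidate, run = [], None, None, 0
    for i, label in enumerate(frames):
        if label == candidate:
            run += 1
        else:
            candidate, run = label, 1
        if run >= stable and candidate != state:
            events.append((i, state, candidate))
            state = candidate
    return events

# A one-frame "green" flicker at index 3 is ignored; the real
# change to green is reported once it has held for three frames.
obs = ["red", "red", "red", "green", "red", "red", "red",
       "green", "green", "green"]
events = signal_transitions(obs)
```

A robot waiting at a crossing would act on the debounced event, not on any single frame's classification.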
“We used three Jetsons and were amazed at the power-saving features. Our robot was able to run for three hours on a mobile battery (12V, 50Ah),” said Shigeru Bando, Ph.D. in Engineering and leader of the i-Cart Middle team.
Learn more about how NVIDIA Jetson is advancing robotics >
