Researchers from the University of California, Berkeley, and Siemens designed a robot that learns how to grip new objects just by studying a database of 3D shapes.
Using a GTX 1080 GPU and cuDNN with the TensorFlow deep learning framework, the team generated 6.7 million synthetic point clouds from thousands of 3D models to train their convolutional neural network to recognize robust grasps. Once trained, the robot was evaluated on test objects not included in the training set, and it successfully lifted them 99% of the time. The researchers say this is a significant step up from their previous methods, which relied on analytic and statistical sampling.
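To make the idea of synthetically labeled grasp data concrete, here is a minimal sketch in plain Python. It is not the researchers' actual pipeline; it illustrates one standard way grasp labels are derived from geometry alone: sample antipodal gripper contacts on an object outline (2D here for brevity, rather than 3D point clouds) and label a grasp robust when the grasp axis lies inside the friction cone at both contacts. All function names and the friction coefficient are illustrative assumptions.

```python
import math
import random

FRICTION_COEF = 0.5  # assumed Coulomb friction coefficient


def circle_outline(n=360, radius=1.0):
    """Boundary points and outward unit normals of a circular cross-section."""
    pts, normals = [], []
    for i in range(n):
        a = 2.0 * math.pi * i / n
        normals.append((math.cos(a), math.sin(a)))
        pts.append((radius * math.cos(a), radius * math.sin(a)))
    return pts, normals


def grasp_is_robust(p1, n1, p2, n2, mu=FRICTION_COEF):
    """True if the line between the two contacts lies in both friction cones."""
    ax, ay = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(ax, ay)
    if length == 0.0:
        return False
    ax, ay = ax / length, ay / length   # unit grasp axis, pointing p1 -> p2
    half_angle = math.atan(mu)          # friction cone half-angle
    # Force at contact 1 pushes along +axis; compare with inward normal -n1.
    a1 = math.acos(max(-1.0, min(1.0, -(ax * n1[0] + ay * n1[1]))))
    # Force at contact 2 pushes along -axis; compare with inward normal -n2.
    a2 = math.acos(max(-1.0, min(1.0, ax * n2[0] + ay * n2[1])))
    return a1 <= half_angle and a2 <= half_angle


def label_random_grasps(pts, normals, n_samples=1000, seed=0):
    """Sample candidate contact pairs and attach a robust / not-robust label."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_samples):
        i, j = rng.randrange(len(pts)), rng.randrange(len(pts))
        label = grasp_is_robust(pts[i], normals[i], pts[j], normals[j])
        data.append(((i, j), label))
    return data
```

On a circular outline, only near-diametric contact pairs pass the test, which matches intuition about a parallel-jaw grip. Labels generated this way from many object models are what a grasp-quality network would then be trained to predict from sensor data.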
The work shows how new approaches to robot learning, combined with the ability for robots to access information through the cloud, could advance the capabilities of robots in factories and warehouses, and might even enable these machines to do useful work in new settings like hospitals and homes.
Robot Uses Deep Learning to Grasp Awkward and Unusual Objects
May 24, 2017