Developers from the California-based non-profit OpenAI announced today that they have trained a deep learning system capable of grasping and manipulating real-world objects with remarkable dexterity.
“While dexterous manipulation of objects is a fundamental everyday task for humans, it is still challenging for autonomous robots,” the developers stated in their research paper.
In the work, the team demonstrates how they taught a deep learning system human grasps such as the tripod, prismatic, and tip pinch. The neural network also learned actions and behaviors such as finger pivoting, finger gaiting, sliding, multi-finger coordination, the controlled use of gravity, and the coordinated application of translational and torsional forces.
Using NVIDIA Tesla V100 GPUs and the cuDNN-accelerated TensorFlow deep learning framework, the team trained their neural networks on roughly one hundred years of simulated experience, accumulated in about 50 hours, the developers said.
“Our system, called Dactyl, is trained entirely in simulation and transfers its knowledge to reality, adapting to real-world physics using techniques we’ve been working on for the past year,” the OpenAI team wrote in a blog post.
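The "adapting to real-world physics" the team describes is commonly achieved by randomizing the simulator's physics during training, so the policy never overfits to one idealized world. The sketch below is purely illustrative, not OpenAI's code: the parameter names and ranges are hypothetical stand-ins for the kinds of quantities such a system might resample at the start of each training episode.

```python
import random

def sample_randomized_physics(rng=random):
    """Draw one hypothetical set of physics parameters for a simulated episode.

    Resampling these every episode forces the policy to work across a whole
    distribution of "worlds", which helps it transfer to the one real world.
    All names and ranges here are illustrative assumptions.
    """
    return {
        "object_mass_kg": rng.uniform(0.03, 0.30),      # cube mass
        "friction_scale": rng.uniform(0.5, 1.5),        # surface friction multiplier
        "actuator_gain": rng.uniform(0.8, 1.2),         # motor strength multiplier
        "sensor_noise_std_m": rng.uniform(0.0, 0.005),  # position-sensor noise
    }
```

In a training loop, a fresh sample like this would be applied to the simulator before each episode, while the policy network sees only observations, never the parameters themselves.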
Once fully trained, the neural network performed 50 consecutive cube rotations without a drop.
“Our results demonstrate that, contrary to a common belief, contemporary deep reinforcement learning (RL) algorithms can be applied to solving complex real-world robotics problems which are beyond the reach of existing non-learning-based approaches.”
For inference, the team used an NVIDIA GPU and TensorFlow. “Every 80ms it queries the PhaseSpace sensors and then runs inference with the neural network to obtain the action, which takes roughly 25ms. The policy outputs an action that specifies the change of position for each actuator, relative to the current position of the joints controlled by the actuator. It then sends the action to the low-level controller,” the developers said.
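One tick of the loop described above can be sketched as follows. This is a hedged illustration, not Dactyl's actual code: `read_sensors`, `policy`, and `send_to_controller` are hypothetical stand-ins for the PhaseSpace query, the trained network, and the low-level controller interface.

```python
CONTROL_PERIOD_S = 0.080  # sensors are queried every 80 ms, per the blog post

def control_step(read_sensors, policy, current_joint_positions, send_to_controller):
    """One tick of a Dactyl-style control loop (illustrative sketch).

    `read_sensors`, `policy`, and `send_to_controller` are hypothetical
    callables standing in for the real sensor, network, and controller.
    """
    observation = read_sensors()
    # Inference takes roughly 25 ms in the real system; the output is a
    # *relative* position change per actuator, not an absolute target.
    deltas = policy(observation)
    targets = [q + d for q, d in zip(current_joint_positions, deltas)]
    send_to_controller(targets)
    return targets
```

A supervising loop would call `control_step` once per `CONTROL_PERIOD_S`, feeding back the latest joint positions each tick.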
The developers concede their method isn’t perfect, but they are hopeful their approach can solve more complex tasks in the future. “We can go even further beyond what today’s hand-programmed robots can do,” said Alex Ray, a machine learning engineer at OpenAI.
Related resources
- DLI course: Deep Learning for Industrial Inspection
- GTC session: Empowering Collaborative Robots: The Future of AI Vision With Digital Twins
- GTC session: Robotics and the Role of AI: Past, Present, and Future
- GTC session: Deploying AI in Real-World Robots
- SDK: Isaac Lab
- SDK: Isaac Manipulator