This Reinforcement Learning Algorithm Can Capture Motion and Recreate It
Oct 11, 2018

Researchers from the University of California, Berkeley developed a reinforcement learning-based system that can automatically capture and mimic the motions it sees in YouTube videos.
“Data-driven methods have been a cornerstone of character animation for decades, with motion-capture being one of the most popular sources of motion data. Mocap data is a staple for kinematic methods, and is also widely used in physics-based character animation,” the Berkeley researchers stated in their paper.
Using NVIDIA GeForce GTX 1080 Ti and TITAN Xp GPUs with the cuDNN-accelerated TensorFlow deep learning framework, the team trained their reinforcement learning system on several datasets to estimate character poses and extract mocap data from video clips.
Given video clips, the algorithm estimates the pose and movement of an actor in each frame. The team trained their algorithm to perform more than 20 acrobatic skills, including backflips, cartwheels, and even martial arts moves.
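At a high level, a system like this can be read as a pipeline: estimate a pose in every video frame, turn those noisy per-frame estimates into a consistent reference motion, and then train a control policy to reproduce it. The sketch below is a simplified illustration of that flow, not the authors' implementation: `estimate_pose` is a stand-in for a learned pose estimator, and a plain moving average stands in for the paper's motion-reconstruction step.

```python
import numpy as np

def estimate_pose(frame):
    """Stand-in for a learned pose estimator that maps an image
    frame to a pose vector (e.g., joint rotations). Returns a
    dummy pose here so the sketch runs end to end."""
    return np.zeros(34)  # hypothetical 34-D pose vector

def reconstruct_motion(poses, window=5):
    """Smooth noisy per-frame pose estimates into a temporally
    consistent reference motion. A simple moving average stands
    in for the paper's motion-reconstruction stage."""
    poses = np.stack(poses)  # shape: (num_frames, pose_dim)
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, poses)

def skills_from_video(video_frames):
    # Stage 1: estimate the actor's pose in every frame.
    poses = [estimate_pose(frame) for frame in video_frames]
    # Stage 2: reconstruct a clean reference trajectory from the
    # noisy per-frame estimates.
    reference_motion = reconstruct_motion(poses)
    # Stage 3 (not shown): train a simulated character with
    # reinforcement learning to track reference_motion.
    return reference_motion
```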
“The primary contribution of our paper is a system for learning character controllers from video clips that integrates pose estimation and reinforcement learning. To make this possible, we introduce a number of extensions to both the pose tracking system and the reinforcement learning algorithm,” the researchers stated in their paper.
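On the reinforcement learning side, physics-based motion imitation of this kind typically rewards the simulated character for staying close to the reference motion recovered from video. Below is a minimal sketch of such a tracking reward, in the spirit of the exponentiated pose- and velocity-error terms the same group used in their earlier DeepMimic work; the weights and error scales here are illustrative, not the paper's exact values.

```python
import numpy as np

def imitation_reward(pose, pose_ref, vel, vel_ref,
                     w_pose=0.7, w_vel=0.3):
    """Tracking reward for motion imitation: the simulated
    character earns more reward the closer its pose and joint
    velocities stay to the reference motion recovered from
    video. Weights and error scales are illustrative."""
    r_pose = np.exp(-2.0 * np.sum((pose - pose_ref) ** 2))
    r_vel = np.exp(-0.1 * np.sum((vel - vel_ref) ** 2))
    return w_pose * r_pose + w_vel * r_vel

# Example: poses near the reference score close to the maximum
# reward of 1.0; large deviations collapse the pose term to ~0.
ref = np.zeros(34)
print(imitation_reward(ref + 0.01, ref, ref, ref))  # ~0.995
print(imitation_reward(ref + 1.0, ref, ref, ref))   # ~0.3 (velocity term only)
```

At each simulation step, the RL algorithm feeds a reward of this shape back into policy updates, so motions that drift from the reference earn exponentially less.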
The system can also infer poses from videos and even single-frame images, predicting where an actor might move next.
A paper describing the method was published on arXiv this week.
Read more>
Related resources
- GTC session: Reward Fine-Tuning for Faster and More Accurate Unsupervised Object Discovery
- GTC session: Parkour and More: How Simulation-Based RL Helps to Push the Boundaries in Legged Locomotion
- GTC session: Sim-to-Real With Isaac Gym: Basics and Real-World Examples on Robotic Hands
- NGC Containers: Animation Graph Microservice
- SDK: VCR (Virtual Reality Capture and Replay)
- SDK: Isaac Lab