Stanford researchers in the Computational Vision and Geometry Lab developed a robot that could soon move autonomously among us while observing normal human social etiquette, such as deciding who has the right of way on the sidewalk.
Using a Tesla K40 GPU and CUDA to train its machine learning models, the robot understands its surroundings and navigates streets and hallways alongside humans, learning the unwritten conventions of social behavior over time.
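The core idea, learning movement conventions from observed pedestrian paths, can be illustrated with a toy model. The sketch below is purely hypothetical and is not the researchers' method: it trains a simple least-squares predictor on synthetic trajectories to guess a pedestrian's next position from their last few positions, where the trajectory generator, the window size `k`, and all function names are assumptions made for illustration.

```python
# Hypothetical toy sketch: learn to predict a pedestrian's next position
# from their recent path. The actual Jackrabbot models are deep networks
# trained on real pedestrian data; this uses synthetic straight-line
# trajectories and plain least squares, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def make_trajectory(steps=20):
    """Synthetic pedestrian: roughly straight-line walk with small noise."""
    start = rng.uniform(-5, 5, size=2)
    velocity = rng.uniform(-1, 1, size=2)
    t = np.arange(steps)[:, None]
    return start + t * velocity + rng.normal(0, 0.05, size=(steps, 2))

def windows(traj, k=3):
    """Turn one trajectory into (past k positions -> next position) pairs."""
    X = [traj[i:i + k].ravel() for i in range(len(traj) - k)]
    y = [traj[i + k] for i in range(len(traj) - k)]
    return np.array(X), np.array(y)

# Build a training set from many synthetic trajectories.
Xs, ys = zip(*(windows(make_trajectory()) for _ in range(200)))
X, y = np.vstack(Xs), np.vstack(ys)

# Fit a linear next-step predictor with least squares.
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate on a fresh, unseen trajectory.
Xt, yt = windows(make_trajectory())
err = np.linalg.norm(Xt @ W - yt, axis=1).mean()
print(f"mean next-step error: {err:.3f} m")
```

For near-linear motion the fitted weights approximate constant-velocity extrapolation; real social navigation additionally has to model interactions between people, which is what the deep models in this line of research address.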
“By learning social conventions, the robot can be part of ecosystems where humans and robots coexist,” said Silvio Savarese, an assistant professor of computer science and director of the Stanford Computational Vision and Geometry Lab.
The researchers estimate these types of robots will become available for only $500 in five to six years.
“It’s possible to make these robots affordable for on-campus delivery, or for aiding impaired people to navigate in a public space like a train station or for guiding people to find their way through an airport,” Savarese said.
Stanford’s Social Robot ‘Jackrabbot’ Seeks to Understand Pedestrian Behavior
Jun 03, 2016

Related resources
- DLI course: Building Video AI Applications at the Edge on Jetson Nano
- GTC session: Connect with the Experts: Development and Simulation of Autonomous Robots (Spring 2023)
- GTC session: Isaac Sim: A Cloud-Enabled Simulation Toolbox for Robotics (Spring 2023)
- GTC session: Jetson Edge AI Developer Days: Design a Complex Architecture on NVIDIA Isaac ROS (Spring 2023)
- SDK: Isaac SDK
- SDK: Isaac ROS