University of Waterloo researchers are using deep learning and computer vision to develop autonomous exoskeleton legs to help users walk, climb stairs, and avoid obstacles.
The ExoNet project, described in an early-access paper in Frontiers in Robotics and AI, fits users with wearable cameras. AI software processes the camera's video stream and is being trained to recognize surrounding features such as stairs and doorways, and then determine the best movements to take.
“Our control approach wouldn’t necessarily require human thought,” said Brokoslaw Laschowski, Ph.D. candidate in systems design engineering and lead author on the ExoNet project. “Similar to autonomous cars that drive themselves, we’re designing autonomous exoskeletons that walk for themselves.”
People who rely on exoskeletons for mobility typically operate the devices using smartphone apps or joysticks.
“That can be inconvenient and cognitively demanding,” said Laschowski, who works with engineering professor John McPhee, the Canada Research Chair in Biomechatronic System Dynamics. “Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode.”
The researchers used an NVIDIA TITAN GPU for neural network training and real-time image classification of walking environments. They collected over 5.6 million images of human locomotion environments to create a database dubbed ExoNet, which was used to train the initial model, developed using the TensorFlow deep learning framework.
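To make the classification step concrete, here is a minimal sketch of how an environment classifier might be set up in TensorFlow. The network architecture, image size, and class labels below are illustrative assumptions, not details from the ExoNet paper, and the random arrays stand in for real camera frames:

```python
import numpy as np
import tensorflow as tf

# Assumed labels for illustration only, e.g. level ground, stairs, doorway.
NUM_CLASSES = 3

# A small convolutional classifier; the real ExoNet model is not shown here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy frames standing in for images from the ExoNet database.
frames = np.random.rand(8, 128, 128, 3).astype("float32")
labels = np.random.randint(0, NUM_CLASSES, size=8)

model.fit(frames, labels, epochs=1, verbose=0)
probs = model.predict(frames, verbose=0)  # one probability row per frame
```

In a deployed system, each row of `probs` would be read as the network's belief about the current walking environment, which a controller could then map to a locomotion mode.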
Still in development, the exoskeleton system must learn to operate on uneven terrain and avoid obstacles before becoming fully functional. To boost battery life, the team plans to use human motion to help charge the devices.
Their recent paper analyzed how the joint mechanical power from a person sitting down could regenerate electrical power usable to charge the robotic exoskeletons.
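The energy-regeneration idea can be illustrated with a back-of-envelope calculation: mechanical work at the joint, scaled by a conversion efficiency, gives the electrical energy recoverable per movement. All numbers below are assumptions for illustration, not values from the Waterloo paper:

```python
# Assumed figures, for illustration only.
joint_power_w = 50.0   # average knee joint mechanical power while sitting down (W)
duration_s = 2.0       # duration of the sit-down motion (s)
efficiency = 0.6       # mechanical-to-electrical conversion efficiency

# Mechanical work done at the joint, then the electrical fraction recovered.
mechanical_work_j = joint_power_w * duration_s
electrical_energy_j = mechanical_work_j * efficiency
print(f"Recovered energy per sit-down: {electrical_energy_j:.1f} J")  # 60.0 J
```

Under these assumed numbers, each sit-down motion would return tens of joules to the battery; the actual recoverable energy depends on the joint powers and harvester efficiency reported in the paper.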
Read the University of Waterloo news release for more >>
The researchers’ latest paper is available here. The original paper, published in 2019 at the IEEE International Conference on Rehabilitation Robotics, was a finalist for a best paper award.