[…] There has been a significant and growing interest in depth estimation from a single RGB image, due to the relatively low cost and size of monocular cameras. We explore learning-based monocular depth estimation, targeting real-time inference on embedded systems. We propose an efficient and lightweight encoder-decoder network architecture and apply network pruning to further reduce computational complexity and latency. We deploy our proposed network, FastDepth, on the Jetson TX2 platform, where it runs at 178 fps on the GPU and at 27 fps on the CPU, with active power consumption under 10 W. FastDepth achieves close to state-of-the-art accuracy on the NYU Depth v2 dataset.
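To make the "lightweight encoder-decoder" idea concrete, here is a minimal sketch of a FastDepth-style network in PyTorch. It pairs a MobileNet-style encoder with a decoder built from depthwise-separable convolutions and nearest-neighbor upsampling, which is the general design described in the paper. The backbone choice (torchvision's MobileNetV2), the layer counts, and the channel widths below are illustrative assumptions, not the authors' released implementation, and skip connections are omitted for brevity.

```python
# Sketch of a FastDepth-style encoder-decoder (assumes torch + torchvision >= 0.13).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


def depthwise_separable(in_ch, out_ch):
    # Depthwise conv followed by a pointwise conv keeps MACs and latency low.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=5, padding=2, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class FastDepthSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Lightweight encoder: MobileNetV2 feature extractor (1/32 resolution, 1280 channels).
        # Assumption: the paper uses a MobileNet encoder; v2 is used here for convenience.
        self.encoder = torchvision.models.mobilenet_v2(weights=None).features
        chans = [1280, 512, 256, 128, 64, 32]  # illustrative channel widths
        self.decoder = nn.ModuleList(
            depthwise_separable(chans[i], chans[i + 1]) for i in range(len(chans) - 1)
        )
        self.final = nn.Conv2d(chans[-1], 1, kernel_size=1)  # single-channel depth map

    def forward(self, x):
        x = self.encoder(x)
        for layer in self.decoder:
            x = layer(x)
            # Nearest-neighbor upsampling adds no learned parameters and is cheap at inference time.
            x = F.interpolate(x, scale_factor=2, mode="nearest")
        return self.final(x)


if __name__ == "__main__":
    model = FastDepthSketch().eval()
    with torch.no_grad():
        depth = model(torch.randn(1, 3, 224, 224))
    print(depth.shape)  # torch.Size([1, 1, 224, 224])
```

A network like this could then be pruned (e.g., channel pruning of the convolution layers) and exported for deployment on an embedded GPU/CPU target such as the Jetson TX2, which is the workflow the excerpt describes.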