GTC 2020: Machine Learning on the Edge for 5G
Alex Keller, NVIDIA | Nikolaus Binder, NVIDIA
Edge computing means processing information close to where it is generated. In modern mobile communication, the reasons to do so include latency constraints, bandwidth reduction, and energy savings. We'll shed light on three machine learning approaches that are especially well suited to edge computing on modern GPUs and show how they interrelate. First, we'll discuss and demonstrate the application of hardware ray tracing in mobile communication. Then, we'll review the efficiency gains from neural networks trained sparse from scratch. Finally, we'll look at kernel-based learning methods with applications to channel estimation and beamforming. Combined on one platform, kernel-based methods for online learning, artificial neural networks for fast data understanding, reinforcement learning for prediction, and hardware-accelerated ray tracing form the basis of highly efficient and scalable GPU solutions for edge computing in 5G.
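To make the kernel-based online learning theme concrete, here is a minimal sketch of kernel least-mean-squares (KLMS), a standard online kernel method of the kind the abstract alludes to for channel estimation. The toy "channel" (a tanh distortion), the kernel width, and the step size are illustrative assumptions, not details from the talk.

```python
import numpy as np

def gaussian_kernel(x, centers, gamma=1.0):
    # RBF similarity between the new input x and all stored centers
    return np.exp(-gamma * np.sum((centers - x) ** 2, axis=1))

def klms(inputs, targets, eta=0.5, gamma=1.0):
    """Kernel least-mean-squares: online, nonparametric regression.
    Each incoming sample becomes a center; one coefficient is added
    per step, scaled by the instantaneous prediction error."""
    centers, coeffs, errors = [], [], []
    for x, d in zip(inputs, targets):
        if centers:
            y = np.dot(coeffs, gaussian_kernel(x, np.array(centers), gamma))
        else:
            y = 0.0                    # no model yet: predict zero
        e = d - y                      # prediction error drives the update
        centers.append(x)
        coeffs.append(eta * e)
        errors.append(abs(e))
    return np.array(centers), np.array(coeffs), errors

# Hypothetical toy channel: a memoryless nonlinear distortion plus noise
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(500, 1))
d = np.tanh(3.0 * x[:, 0]) + 0.01 * rng.standard_normal(500)

centers, coeffs, errors = klms(x, d)
# The running error shrinks as the estimator adapts online
print(np.mean(errors[:50]), np.mean(errors[-50:]))
```

Because the model grows by one center per sample, practical edge deployments would bound the dictionary (e.g. by novelty or coherence criteria), which is one reason such methods pair well with GPU-parallel kernel evaluation.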