Accelerating Recommender System Training and Inference on NVIDIA GPUs
Even Oldridge, NVIDIA | Joey Conway, NVIDIA | Alvaro Garcia, NVIDIA | Nico Koumchatzky, NVIDIA | Alec Gunny & Akshay Subramaniam, NVIDIA | Onur Yilmaz & Chirayu Garg, NVIDIA | Lukasz Mazurek & Scott LeGrand, NVIDIA | Paulius Micikevicius & Levs Dolgovs, NVIDIA
GTC 2020
Come and learn how you can use NVIDIA technologies to accelerate your recommender system training and inference pipelines. We've been doing ground-breaking work on optimizing performance across many stages of the recommender pipeline, including ETL of tabular data, multi-node training of CTR models with terabyte-scale embeddings, low-latency inference for Wide & Deep, and more. Running on NVIDIA GPUs, many of these components are more than an order of magnitude faster than conventional CPU implementations. We'd be thrilled to learn how these accelerated components might apply to your setup and, if not, what's missing. We'd also like to hear about the roles recommenders play in your products, the types of systems you're building, and the challenges you face. This session is ideal for data scientists and engineers responsible for developing, deploying, and scaling recommender pipelines. Please join us for what's sure to be an interesting series of discussions.
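To make the tabular ETL piece concrete, here is a minimal sketch of GPU-accelerated preprocessing of interaction data using cuDF from RAPIDS. The file path and column names (price, item_id, clicked) are hypothetical placeholders, and the specific tooling covered in the session may differ.

    # A minimal sketch of GPU-accelerated tabular ETL with cuDF (RAPIDS).
    # Column names and the input path are hypothetical placeholders.
    import cudf

    # Load click-log style tabular data directly into GPU memory.
    df = cudf.read_parquet("interactions.parquet")

    # Fill missing numeric values, a common preprocessing step for CTR features.
    df["price"] = df["price"].fillna(0)

    # Encode a high-cardinality categorical column as integer codes,
    # ready to index an embedding table during training.
    df["item_id_idx"] = df["item_id"].astype("category").cat.codes

    # Target encoding: mean click-through rate per item, joined back onto the table.
    item_ctr = df.groupby("item_id", as_index=False)["clicked"].mean()
    item_ctr = item_ctr.rename(columns={"clicked": "item_ctr"})
    df = df.merge(item_ctr, on="item_id", how="left")

    print(df.head())

Steps like these run entirely on the GPU, which is where the order-of-magnitude speedups over CPU-based preprocessing typically come from.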