NVIDIA MERLIN
NVIDIA Merlin is an open beta framework for building large-scale deep learning recommender systems.
Merlin empowers data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale. Merlin includes tools that democratize building deep learning recommenders by addressing common ETL, training, and inference challenges. Each stage of the Merlin pipeline is optimized to support hundreds of terabytes of data, all accessible through easy-to-use APIs. With Merlin, better predictions than traditional methods, and the increased click-through rates that come with them, are within reach.
Merlin ETL
NVTabular
Merlin NVTabular is a feature engineering and preprocessing library designed to manipulate terabyte-scale recommender system datasets efficiently and significantly reduce data preparation time. It provides feature transformations, preprocessing, and high-level abstractions that accelerate computation on GPUs using the RAPIDS cuDF library.
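To make the kind of transform concrete, here is a plain-Python sketch of categorical encoding, the operation NVTabular performs (on the GPU, via cuDF) when it maps raw categorical values to contiguous integer IDs that can index an embedding table. This is an illustration of the concept, not NVTabular's actual API; the `categorify` function name and the choice to reserve ID 0 for unseen values are assumptions for this sketch.

```python
# Conceptual sketch of categorical encoding for recommender features:
# map each distinct raw value to a contiguous integer ID so it can be
# used as an embedding-table index. NVTabular performs this class of
# transform GPU-accelerated via RAPIDS cuDF; this is plain Python.

def categorify(column):
    """Encode a column of raw categorical values as contiguous integer IDs.

    IDs start at 1; 0 is reserved for unseen/null values (a common
    convention, assumed here for illustration).
    """
    vocab = {}
    encoded = []
    for value in column:
        if value not in vocab:
            vocab[value] = len(vocab) + 1  # next unused ID
        encoded.append(vocab[value])
    return encoded, vocab

ids, vocab = categorify(["shoes", "hats", "shoes", "bags"])
print(ids)  # [1, 2, 1, 3]
```

At terabyte scale the vocabulary itself can be enormous, which is why doing this on GPUs with out-of-core processing matters.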
Merlin Training
HugeCTR
Merlin HugeCTR is a deep neural network training framework designed for recommender systems. It provides distributed training with model-parallel embedding tables and data-parallel neural networks across multiple GPUs and nodes for maximum performance. HugeCTR covers common and recent architectures such as Deep Learning Recommendation Model (DLRM), Wide and Deep, Deep Cross Network, and DeepFM.
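The split between model-parallel embeddings and data-parallel dense layers is the key idea: embedding tables for recommender models can exceed a single GPU's memory, so the table is sharded across devices while the (comparatively small) dense network is replicated on every GPU. The following is a conceptual plain-Python sketch of row-wise sharding, not HugeCTR's actual API; the modulo placement rule and all names here are illustrative assumptions.

```python
# Conceptual sketch of model-parallel embedding storage: the embedding
# table is sharded row-wise across GPUs, so a table larger than any one
# device's memory can still be held and trained. The dense network (not
# shown) would be replicated data-parallel on every GPU.

NUM_GPUS = 4
EMBEDDING_DIM = 8

# Each "GPU" holds only the embedding rows assigned to it.
shards = [dict() for _ in range(NUM_GPUS)]

def owner(row_id):
    """Which GPU shard owns this embedding row (simple modulo placement)."""
    return row_id % NUM_GPUS

def lookup(row_id):
    """Fetch an embedding vector from the shard that owns it."""
    shard = shards[owner(row_id)]
    if row_id not in shard:
        shard[row_id] = [0.0] * EMBEDDING_DIM  # lazily initialize a row
    return shard[row_id]

vec = lookup(10)             # row 10 lives on shard 10 % 4 == 2
print(owner(10), len(vec))   # 2 8
```

In a real multi-GPU system each lookup whose row lives on another device requires an all-to-all exchange, which is one of the communication patterns such frameworks optimize.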
Merlin Inference
TensorRT and Triton
Merlin Inference accelerates production inference on GPUs for both feature transforms and neural network execution, maximizing throughput while balancing latency and GPU utilization. Take advantage of Merlin Inference with NVIDIA Triton™ Inference Server and NVIDIA® TensorRT™.
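The latency/throughput trade-off being tuned here can be sketched with a back-of-the-envelope model: larger batches amortize fixed per-batch overhead and raise GPU utilization and throughput, but each request waits longer. The numbers below (2 ms fixed overhead, 0.1 ms per item) are illustrative assumptions, not measurements of any Merlin component.

```python
# Illustrative latency/throughput model for batched GPU inference.
# Assumption: per-batch latency = fixed overhead + per-item cost.
# All constants are made up for the sketch.

def batch_latency(batch_size, overhead_s=0.002, per_item_s=0.0001):
    """Hypothetical time to serve one batch, in seconds."""
    return overhead_s + per_item_s * batch_size

def throughput(batch_size, latency_s):
    """Requests served per second at a given batch size and latency."""
    return batch_size / latency_s

for batch in (1, 8, 64):
    lat = batch_latency(batch)
    # Larger batches: higher throughput, but higher latency per request.
    print(f"batch={batch:3d}  latency={lat * 1000:.1f} ms  "
          f"throughput={throughput(batch, lat):.0f} req/s")
```

Serving systems such as Triton expose dynamic batching precisely so this trade-off can be tuned to a latency budget rather than fixed at model-export time.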
Merlin Reference
Applications
Get started with open source reference implementations and achieve state-of-the-art accuracy on public datasets with up to 10x acceleration.
An End-to-End System Architecture
NVIDIA Merlin accelerates the entire pipeline from ingesting and training to deploying GPU-accelerated recommender systems. Models and tools simplify building and deploying a production-quality pipeline. We invite you to share some information about your recommender pipeline in this survey to influence the Merlin Roadmap.
Figure 1: NVIDIA Merlin Open Beta Recommender System Framework
NVTabular reduces data preparation time by GPU-accelerating feature transformations and preprocessing.
HugeCTR is a deep neural network training framework that is capable of distributed training across multiple GPUs and nodes for maximum performance.
NVIDIA Triton™ Inference Server and NVIDIA® TensorRT™ accelerate production inference on GPUs for feature transforms and neural network execution.
Recommender-Specific APIs
Features APIs built specifically for managing the massive tabular datasets and model architectures used in recommender systems.
Robust Scalable Performance
Specifically designed for 100+ terabyte recommender datasets and terabyte-scale embedding tables, with up to 10x the inference performance of other approaches.
State-of-the-Art Models
Supports state-of-the-art hybrid models such as Wide and Deep, Neural Collaborative Filtering (NCF), Variational Autoencoder (VAE), Deep Cross Network, DeepFM, and xDeepFM.
Resources
- Introduction to NVIDIA Merlin (Blog)
- Introduction to HugeCTR (Recorded Talk)
- Accelerated Wide and Deep Pipeline (Blog)
- Building Intelligent Recommender Systems (Deep Learning Workshop)
- GTC 2020 Keynote