The NVIDIA NGC catalog is a hub for GPU-optimized deep learning, machine learning, and HPC applications. With highly performant software containers, pretrained models, industry-specific SDKs, and Helm charts, the content available in the catalog helps simplify and accelerate end-to-end workflows.
A few additions and software updates to the NGC catalog include:
NVIDIA NeMo
NVIDIA NeMo (Neural Modules) is an open-source toolkit for conversational AI. It is designed to let data scientists and researchers easily build new state-of-the-art speech and NLP networks from API-compatible building blocks that can be connected together.
The latest version of NeMo adds support for Conformer ONNX conversion and streaming inference for long audio files, and improves the performance of speaker clustering, verification, and diarization. For NMT, it adds multiple datasets, right-to-left models, noisy channel re-ranking, and ensembling; it also improves NMT training efficiency and adds tutorial notebooks for NMT data cleaning and preprocessing.
NVIDIA HPC SDK
The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries, and tools essential to maximizing developer productivity, performance, and portability of HPC applications.
The latest version includes full support for the NVIDIA Arm HPC Developer Kit and CUDA 11.4. It also offers HPC compilers with Arm-specific performance enhancements, including improved vectorization and optimized math functions.
NVIDIA Data Center Infrastructure-on-a-Chip Architecture (NVIDIA DOCA)
The NVIDIA DOCA SDK enables developers to rapidly create applications and services on top of BlueField data processing units (DPUs).
The NVIDIA DOCA container resource helps deploy DOCA applications and development setups on the BlueField DPU. Deployment is based on Kubernetes, and the resource bundles ready-to-use .yaml configuration files for the different DOCA containers.
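The bundled .yaml files themselves ship with the NGC resource; as a rough illustration of the kind of Kubernetes pod spec involved (the image path, tag, and settings below are placeholders, not the official values):

```yaml
# Illustrative only: the real .yaml files come bundled with the DOCA
# container resource on NGC. Image path, tag, and settings here are
# placeholders, not the official values.
apiVersion: v1
kind: Pod
metadata:
  name: doca-app
spec:
  containers:
    - name: doca-app
      image: nvcr.io/nvidia/doca/doca:latest   # placeholder image/tag
      securityContext:
        privileged: true   # DPU device access typically requires elevated privileges
```

A spec like this is applied on the DPU's Kubernetes (kubelet) setup; the files shipped with the resource are preconfigured for each DOCA container, so they can be used without hand-editing.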
NVIDIA System Management (NVSM)
NVSM is a software framework for monitoring DGX nodes in a data center, providing active health monitoring, system alerts, and log generation. On DGX Station systems, NVSM also reports system health and diagnostic information.
Deep Learning Software
Our most popular deep learning frameworks for training and inference are updated monthly. Pull the latest version (v21.07) of:
PyTorch Lightning
PyTorch Lightning is a lightweight framework for training models at scale on multi-GPU, multi-node configurations. It does so without changing your code, and enables advanced training optimizations with the flip of a flag.
Version 1.4.0 adds support for fully sharded parallelism, which fits much larger models into memory across multiple GPUs, reaching over 40 billion parameters on an A100.
Additionally, it supports the new DeepSpeed Infinity plug-in and new cluster environments, including KubeflowEnvironment and LSFEnvironment.
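As a pseudocode-style sketch of how these v1.4.0 features are turned on (the plugin names and Trainer arguments below follow the v1.4.0 release notes and should be treated as assumptions; they differ in other Lightning versions):

```
# Sketch only -- pinned to the Lightning 1.4 API described above.
from pytorch_lightning import Trainer
from pytorch_lightning.plugins import DeepSpeedPlugin

# Fully sharded training: shard parameters, gradients, and optimizer
# state across GPUs so larger models fit in memory.
trainer = Trainer(gpus=8, precision=16, plugins="fsdp")

# DeepSpeed Infinity: ZeRO stage 3 with optimizer/parameter offloading.
trainer = Trainer(
    gpus=8,
    precision=16,
    plugins=DeepSpeedPlugin(
        stage=3,
        offload_optimizer=True,
        offload_parameters=True,
    ),
)
```

In both cases the LightningModule itself is unchanged; only the Trainer configuration selects the scaling strategy, which is the "flip of a flag" behavior described above.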
See the entire list of new v1.4.0 features >>
The NGC team is hosting a webinar with live Q&A on building AI models using PyTorch Lightning, an AI framework built on top of PyTorch and available from the NGC catalog.
Simplify and Accelerate AI Model Development with PyTorch Lightning, NGC, and AWS
September 2 at 10 a.m. PT
Register now >>
NVIDIA Magnum IO Developer Environment
NVIDIA Magnum IO is the collection of I/O technologies that make up the I/O subsystem of the modern data center and enable applications to run at scale.
The Magnum IO Developer Environment container serves two primary purposes:
- Allows developers to begin scaling applications on a laptop, desktop, workstation, or in the cloud.
- Serves as the basis for a build container, either locally or in a CI/CD system.
Visit the NGC catalog to see how this GPU-optimized software can help simplify workflows and speed up time-to-solution.