Our weekly roundup covers the most recent software updates, learning resources, events, and notable news.
GPU-Accelerated Asymmetric Numeral Systems with nvCOMP v2.2.0
The redesigned nvCOMP 2.2.0 interface provides a single nvcompManagerBase object that handles both compression and decompression. Users can now decompress nvCOMP-compressed files without knowing how they were compressed. The interface can also manage scratch space and split the input buffer into independent chunks for parallel processing.
- The redesigned, high-level interface enhances the user experience by storing metadata in the compressed buffer.
- All compressors are available through both low-level and high-level APIs.
- A proprietary entropy encoder based on Asymmetric Numeral Systems (ANS).
- An entropy-only variant of GDeflate.
- Windows support.
Download now: nvCOMP version 2.2.0
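nvCOMP's ANS encoder is proprietary, but the idea behind the ANS bullet above can be illustrated with a toy, CPU-only rANS codec. This is a minimal sketch of the technique, not the nvCOMP API; all names are illustrative, and Python big integers stand in for the fixed-width state and bit-stream renormalization a real implementation uses.

```python
# Toy rANS (range variant of Asymmetric Numeral Systems) codec.
# Illustrative only: real codecs renormalize a fixed-width state and
# stream out bits; Python big integers keep this sketch short.

def _cumulative(freqs):
    """Map each symbol to the start of its cumulative-frequency range."""
    cum, c = {}, 0
    for s in sorted(freqs):
        cum[s], c = c, c + freqs[s]
    return cum, c  # c is the total frequency

def rans_encode(msg, freqs):
    cum, total = _cumulative(freqs)
    x = 1  # initial state
    for s in reversed(msg):  # encode backwards so decoding pops in order
        f = freqs[s]
        x = (x // f) * total + cum[s] + (x % f)
    return x

def rans_decode(x, n, freqs):
    cum, total = _cumulative(freqs)
    out = []
    for _ in range(n):
        slot = x % total  # which symbol's frequency range are we in?
        s = next(t for t in cum if cum[t] <= slot < cum[t] + freqs[t])
        out.append(s)
        x = freqs[s] * (x // total) + slot - cum[s]
    return "".join(out)

freqs = {"A": 3, "B": 2, "C": 1}  # static symbol frequencies
code = rans_encode("ABACAB", freqs)
print(rans_decode(code, 6, freqs))  # -> ABACAB
```

Frequent symbols grow the state more slowly, so they cost fewer bits, which is how ANS approaches the entropy of the source.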
Learn to Deploy a Text Classification Model Using Riva (DLI)
This free, 30-minute online course is self-paced and includes a sample notebook from the NGC TAO Toolkit—Conversational AI collection, complete with a live GPU environment.
Learn more: Deploy a Text Classification Model Using Riva
Optimized Vehicle Routing (DLI)
In this free one-hour course, participants work through a demonstration of a common vehicle routing optimization problem at their own pace. Upon completion, participants will be able to preprocess input data for use by the NVIDIA ReOpt routing solver, and compose variants of the problem that reflect real-world business constraints.
Register online: Optimized Vehicle Routing
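The ReOpt API itself is not shown here, but the class of problem the course covers can be sketched with a toy nearest-neighbor heuristic for a single capacity-limited vehicle. Everything below is illustrative (the function and data names are invented for this sketch), with the load limit standing in for the kind of real-world business constraint the course mentions.

```python
import math

def nearest_neighbor_routes(depot, stops, capacity):
    """Greedy delivery routes for one vehicle with a load limit.

    stops: {name: ((x, y), demand)}. The vehicle repeatedly leaves the
    depot, visits the nearest unserved stop that still fits its remaining
    capacity, and returns to refill when nothing fits.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    unserved = dict(stops)
    routes = []
    while unserved:
        pos, load, route = depot, capacity, []
        while True:
            feasible = {n: v for n, v in unserved.items() if v[1] <= load}
            if not feasible:
                break
            name = min(feasible, key=lambda n: dist(pos, feasible[n][0]))
            route.append(name)
            pos, load = unserved[name][0], load - unserved[name][1]
            del unserved[name]
        if not route:  # a stop's demand exceeds the vehicle capacity
            raise ValueError("stop demand exceeds vehicle capacity")
        routes.append(route)
    return routes

depot = (0.0, 0.0)
stops = {
    "a": ((1.0, 0.0), 2),
    "b": ((2.0, 0.0), 2),
    "c": ((0.0, 5.0), 3),
}
print(nearest_neighbor_routes(depot, stops, capacity=4))  # -> [['a', 'b'], ['c']]
```

A greedy heuristic like this is a baseline; solvers such as ReOpt search far larger neighborhoods of candidate routes on the GPU.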
Fundamentals of Accelerated Computing with CUDA Python (DLI)
This Deep Learning Institute workshop teaches you the fundamental tools and techniques for running GPU-accelerated Python applications using CUDA GPUs and the Numba compiler. The workshop is offered Feb. 23 from 9 am to 5 pm PT.
At the conclusion of the workshop, you’ll understand the fundamental tools and techniques for GPU-accelerated Python applications with CUDA and Numba, and you’ll be able to:
- GPU-accelerate NumPy ufuncs with a few lines of code.
- Configure code parallelization using the CUDA thread hierarchy.
- Write custom CUDA device kernels for maximum performance and flexibility.
- Use memory coalescing and on-device shared memory to increase CUDA kernel bandwidth.
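As a CPU-only illustration of the thread-hierarchy bullet above, this sketch simulates how a CUDA kernel maps block and thread indices onto array elements. It is a pure-Python stand-in, not Numba's API; in the workshop you would write the same indexing with numba.cuda on an actual GPU.

```python
# Pure-Python simulation of the CUDA global-index computation:
#   i = blockIdx.x * blockDim.x + threadIdx.x
# On a GPU the kernel body runs once per thread in parallel; here we
# loop over the grid explicitly to show which element each thread owns.

def launch_1d(kernel, grid_dim, block_dim, *args):
    """Invoke kernel once per (block, thread) pair, like a 1D launch."""
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(block_idx, thread_idx, block_dim, *args)

def add_one(block_idx, thread_idx, block_dim, out):
    i = block_idx * block_dim + thread_idx  # global thread index
    if i < len(out):                        # guard the ragged tail
        out[i] += 1

# 3 blocks x 4 threads = 12 threads covering 10 elements; the guard
# makes the two excess threads do nothing.
data = [0] * 10
launch_1d(add_one, 3, 4, data)
print(data)  # -> every element incremented exactly once
```

In Numba the equivalent kernel would compute the same index (for example via cuda.grid(1)) and be launched with a `kernel[blocks, threads](...)` configuration; the bounds guard is needed for the same reason as here.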
Learn How Metropolis Boosts Go-to-Market Efforts at a Developer Meetup
Join NVIDIA experts at developer meetups Feb. 16 and 17, and find out how the Metropolis program can grow your vision AI business and enhance go-to-market efforts.
- Metropolis Validation Labs optimize your applications and accelerate deployments.
- NVIDIA Fleet Command simplifies provisioning and management of edge deployments, accelerating the time to scale from POC to production.
- NVIDIA LaunchPad provides easy access to GPU instances for faster POCs and customer trials.
Register online: How the NVIDIA Metropolis Program will Supercharge Your Business
A Flexible Solution for Every AI Inference Deployment
Dive into NVIDIA inference solutions, including open-source NVIDIA Triton Inference Server and NVIDIA TensorRT, with a webinar and live Q&A, Feb. 23 at 10 a.m. PT.
- How to optimize, deploy, and scale AI models in production using Triton Inference Server and TensorRT.
- How Triton streamlines inference serving across multiple frameworks and query types (real-time, batch, streaming), on CPUs and GPUs, with a model analyzer for efficient deployment.
- How to standardize workflows for optimizing models using TensorRT and its framework integrations with PyTorch and TensorFlow.
- How real-world customers are benefitting from Triton and TensorRT.
Register online: A Flexible Solution for Every AI Inference Deployment
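For context on serving different query types, Triton's per-model configuration can enable dynamic batching with a short config.pbtxt fragment like the one below. This is a sketch based on Triton's documented configuration format; the model name and the numeric values are placeholders, not recommendations.

```
name: "my_tensorrt_model"   # placeholder model name
platform: "tensorrt_plan"
max_batch_size: 8
dynamic_batching {
  preferred_batch_size: [ 4, 8 ]
  max_queue_delay_microseconds: 100
}
```

Dynamic batching lets Triton group individual real-time requests into larger batches on the GPU, which is one way a single deployed model serves both real-time and batch-style traffic.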