DEVELOPER BLOG

Tag: Triton

AI / Deep Learning

Fast-Track Production AI with Pretrained Models and Transfer Learning Toolkit 3.0

NVIDIA announced new pre-trained models and general availability of Transfer Learning Toolkit (TLT) 3.0, a core component of NVIDIA's Train… 3 MIN READ
AI / Deep Learning

Getting the Most Out of NVIDIA T4 on AWS G4 Instances

Learn how to get the best natural language inference performance from AWS G4dn instances powered by NVIDIA T4 GPUs, and how to deploy BERT networks easily using… 14 MIN READ
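
The post above walks through deploying BERT on NVIDIA Triton Inference Server; as a rough companion sketch of the client side only, the snippet below sends a request to an already-running Triton server with the official tritonclient Python package. The model name "bert", the tensor names "input_ids", "attention_mask", "token_type_ids", and "logits", and the sequence length of 128 are placeholders that depend on how the model was exported, not values taken from the article.

```python
# Minimal sketch: query a BERT model already deployed on Triton Inference Server.
# Assumes `pip install tritonclient[http]` and a server listening on localhost:8000.
# Model and tensor names/shapes are illustrative; match them to your deployed model.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

seq_len = 128
inputs = []
for name in ("input_ids", "attention_mask", "token_type_ids"):
    t = httpclient.InferInput(name, [1, seq_len], "INT32")
    t.set_data_from_numpy(np.zeros((1, seq_len), dtype=np.int32))  # dummy token IDs
    inputs.append(t)

outputs = [httpclient.InferRequestedOutput("logits")]
result = client.infer(model_name="bert", inputs=inputs, outputs=outputs)
print(result.as_numpy("logits").shape)
```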
Data Science

Enabling Predictive Maintenance Using Root Cause Analysis, NLP, and NVIDIA Morpheus

The RAPIDS CLX team collaborated with the NVIDIA Enterprise Experience (NVEX) team to run a proof-of-concept (POC) evaluating this NLP-based predictive maintenance solution. 6 MIN READ
AI / Deep Learning

MLOps Made Simple & Cost Effective with Google Kubernetes Engine and NVIDIA A100 Multi-Instance GPUs

Google Cloud and NVIDIA collaborated to make MLOps simple, powerful, and cost-effective by bringing together the solution elements to build… 5 MIN READ
AI / Deep Learning

Scaling Inference in High Energy Particle Physics at Fermilab Using NVIDIA Triton Inference Server

In a series of studies, physicists from Fermilab, CERN, and university groups explored how to accelerate their data processing using NVIDIA Triton Inference… 9 MIN READ
AI / Deep Learning

Extending NVIDIA Performance Leadership with MLPerf Inference 1.0 Results

In this post, we step through some of the optimizations behind these results, including the use of Triton Inference Server and the A100 Multi-Instance GPU (MIG) feature. 7 MIN READ
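
As a loose illustration of one ingredient in that kind of setup, keeping a Triton server busy enough to feed multiple model instances or A100 MIG slices, the sketch below issues concurrent requests through tritonclient's async HTTP API. The model name "resnet50", the tensor names "input" and "output", and the shapes are assumptions for illustration only, not details from the MLPerf submission.

```python
# Minimal sketch: drive concurrent requests at a Triton server, as you might when
# the server hosts several model instances or serves multiple A100 MIG slices.
# Model name, tensor names, and shapes are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000", concurrency=8)

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
pending = []
for _ in range(32):
    inp = httpclient.InferInput("input", list(batch.shape), "FP32")
    inp.set_data_from_numpy(batch)
    out = httpclient.InferRequestedOutput("output")
    # async_infer returns immediately; requests go out over a pooled set of connections
    pending.append(client.async_infer("resnet50", [inp], outputs=[out]))

# Collect results; get_result() blocks until each request completes
for req in pending:
    print(req.get_result().as_numpy("output").shape)
```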