TensorRT

Sep 03, 2025
Accelerate Autonomous Vehicle Development with the NVIDIA DRIVE AGX Thor Developer Kit
Autonomous vehicle (AV) technology is rapidly evolving, fueled by ever-larger and more complex AI models deployed at the edge. Modern vehicles now require not...
8 MIN READ

Aug 29, 2025
Fine-Tuning gpt-oss for Accuracy and Performance with Quantization Aware Training
Major open-source foundational model releases are an exciting time for the AI community, bringing unique architectural innovations and capabilities. As the...
7 MIN READ

Aug 22, 2025
Inside NVIDIA Blackwell Ultra: The Chip Powering the AI Factory Era
As the latest member of the NVIDIA Blackwell architecture family, the NVIDIA Blackwell Ultra GPU builds on core innovations to accelerate training and AI...
14 MIN READ

Aug 22, 2025
NVIDIA Hardware Innovations and Open Source Contributions Are Shaping AI
Open source AI models such as Cosmos, DeepSeek, Gemma, GPT-OSS, Llama, Nemotron, Phi, Qwen, and many more are the foundation of AI innovation. These models are...
8 MIN READ

Aug 05, 2025
NVIDIA Accelerates OpenAI gpt-oss Models, Delivering 1.5M TPS Inference on NVIDIA GB200 NVL72
NVIDIA and OpenAI began pushing the boundaries of AI with the launch of NVIDIA DGX back in 2016. The collaborative AI innovation continues with the OpenAI...
6 MIN READ

Aug 01, 2025
Optimizing LLMs for Performance and Accuracy with Post-Training Quantization
Quantization is a core tool for developers aiming to improve inference performance with minimal overhead. It delivers significant gains in latency, throughput,...
14 MIN READ

Jul 24, 2025
Double PyTorch Inference Speed for Diffusion Models Using Torch-TensorRT
NVIDIA TensorRT is an AI inference library built to optimize machine learning models for deployment on NVIDIA GPUs. TensorRT targets dedicated hardware in...
8 MIN READ

Jul 07, 2025
LLM Inference Benchmarking: Performance Tuning with TensorRT-LLM
This is the third post in the large language model latency-throughput benchmarking series, which aims to instruct developers on how to benchmark LLM inference...
11 MIN READ

Jul 02, 2025
Optimizing FLUX.1 Kontext for Image Editing with Low-Precision Quantization
FLUX.1 Kontext, the recently released model from Black Forest Labs, is a fascinating addition to the repertoire of community image generation models. The open...
10 MIN READ

Jun 25, 2025
Check Out Sovereign AI in Practice Through an NVIDIA Webinar
Join NVIDIA experts and leading European model builders on July 8 for a webinar on building and deploying multilingual large language models.
1 MIN READ

Jun 25, 2025
How to Streamline Complex LLM Workflows Using NVIDIA NeMo-Skills
A typical recipe for improving LLMs involves multiple stages: synthetic data generation (SDG), model training through supervised fine-tuning (SFT) or...
10 MIN READ

Jun 24, 2025
Introducing NVFP4 for Efficient and Accurate Low-Precision Inference
To get the most out of AI, optimizations are critical. When developers think about optimizing AI models for inference, model compression techniques—such as...
11 MIN READ

Jun 12, 2025
Run High-Performance AI Applications with NVIDIA TensorRT for RTX
NVIDIA TensorRT for RTX is now available for download as an SDK that can be integrated into C++ and Python applications for both Windows and Linux. At...
7 MIN READ

May 30, 2025
NVIDIA Deep Learning Institute Offers Multilingual AI Training at GTC Paris
Large language models (LLMs) are capable of recognizing, summarizing, translating, predicting, and generating content. Yet even the most powerful LLMs face...
6 MIN READ

May 22, 2025
Blackwell Breaks the 1,000 TPS/User Barrier With Meta’s Llama 4 Maverick
NVIDIA has achieved a world-record large language model (LLM) inference speed. A single NVIDIA DGX B200 node with eight NVIDIA Blackwell GPUs can achieve over...
9 MIN READ

May 19, 2025
NVIDIA TensorRT for RTX Introduces an Optimized Inference AI Library on Windows 11
AI experiences are rapidly expanding on Windows in creativity, gaming, and productivity apps. There are various frameworks available to accelerate AI inference...
9 MIN READ