Benchmark

Oct 09, 2025
NVIDIA Blackwell Leads on SemiAnalysis InferenceMAX™ v1 Benchmarks
SemiAnalysis recently launched InferenceMAX™ v1, a new open source initiative that provides a comprehensive methodology to evaluate inference hardware...
11 MIN READ

Oct 06, 2025
Accelerating Large-Scale Data Analytics with GPU-Native Velox and NVIDIA cuDF
As workloads scale and demand for faster data processing grows, GPU-accelerated databases and query engines have been shown to deliver significant...
7 MIN READ

Sep 10, 2025
Maximizing Low-Latency Networking Performance for Financial Services with NVIDIA Rivermax and NEIO FastSocket
Ultra-low latency and reliable packet delivery are critical requirements for modern applications in sectors such as the financial services industry (FSI), cloud...
10 MIN READ

Sep 09, 2025
NVIDIA Blackwell Ultra Sets New Inference Records in MLPerf Debut
As large language models (LLMs) grow larger, they get smarter, with open models from leading developers now featuring hundreds of billions of parameters. At the...
10 MIN READ

Sep 02, 2025
Cut Model Deployment Costs While Keeping Performance With GPU Memory Swap
Deploying large language models (LLMs) at scale presents a dual challenge: ensuring fast responsiveness during high demand, while managing the costs of GPUs...
6 MIN READ

Aug 29, 2025
Fine-Tuning gpt-oss for Accuracy and Performance with Quantization Aware Training
Major open-source foundational model releases are an exciting time for the AI community, bringing unique architectural innovations and capabilities. As the...
7 MIN READ

Aug 05, 2025
NVIDIA Accelerates OpenAI gpt-oss Models Delivering 1.5M TPS Inference on NVIDIA GB200 NVL72
NVIDIA and OpenAI began pushing the boundaries of AI with the launch of NVIDIA DGX back in 2016. The collaborative AI innovation continues with the OpenAI...
6 MIN READ

Jul 18, 2025
Optimizing for Low-Latency Communication in Inference Workloads with JAX and XLA
Running inference with large language models (LLMs) in production requires meeting stringent latency constraints. A critical stage in the process is LLM decode,...
6 MIN READ

Jul 07, 2025
Think Smart and Ask an Encyclopedia-Sized Question: Multi-Million Token Real-Time Inference for 32X More Users
Modern AI applications increasingly rely on models that combine huge parameter counts with multi-million-token context windows. Whether it is AI agents...
8 MIN READ

Jun 12, 2025
Run High-Performance AI Applications with NVIDIA TensorRT for RTX
NVIDIA TensorRT for RTX is now available for download as an SDK that can be integrated into C++ and Python applications for both Windows and Linux. At...
7 MIN READ

Jun 04, 2025
Reproducing NVIDIA MLPerf v5.0 Training Scores for LLM Benchmarks
The previous post, NVIDIA Blackwell Delivers up to 2.6x Higher Performance in MLPerf Training v5.0, explains how the NVIDIA platform delivered the fastest time...
11 MIN READ

Jun 04, 2025
NVIDIA Blackwell Delivers up to 2.6x Higher Performance in MLPerf Training v5.0
The journey to create a state-of-the-art large language model (LLM) begins with a process called pretraining. Pretraining a state-of-the-art model is...
12 MIN READ

Jun 03, 2025
New NVIDIA Llama Nemotron Nano Vision Language Model Tops OCR Benchmark for Accuracy
Documents such as PDFs, graphs, charts, and dashboards are rich sources of data that, when extracted and organized, provide informative decision-making...
7 MIN READ

May 22, 2025
Blackwell Breaks the 1,000 TPS/User Barrier With Meta’s Llama 4 Maverick
NVIDIA has achieved a world-record large language model (LLM) inference speed. A single NVIDIA DGX B200 node with eight NVIDIA Blackwell GPUs can achieve over...
9 MIN READ

May 18, 2025
Announcing NVIDIA Exemplar Clouds for Benchmarking AI Cloud Infrastructure
Developers and enterprises training large language models (LLMs) and deploying AI workloads in the cloud have long faced a fundamental challenge: it’s nearly...
4 MIN READ

May 06, 2025
LLM Inference Benchmarking Guide: NVIDIA GenAI-Perf and NIM
This is the second post in the LLM Benchmarking series, which shows how to use GenAI-Perf to benchmark the Meta Llama 3 model when deployed with NVIDIA NIM...
11 MIN READ