AI Training
Deploying AI in real-world applications requires training networks to convergence at a specified accuracy. Measuring the time it takes to train to that target is the most meaningful test of whether an AI system is ready to be deployed in the field and deliver useful results.
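As a rough illustration of what the "Time to Train" results below measure, here is a minimal, self-contained sketch of training to a quality target: the loop runs until a validation metric reaches the target and reports the elapsed wall-clock time. The helpers are dummy stand-ins, not MLPerf harness APIs.

```python
import time

def train_one_epoch(state):
    """Dummy stand-in for one epoch of optimizer steps."""
    state["accuracy"] += 0.1  # placeholder for real learning progress

def evaluate(state):
    """Dummy stand-in for a validation pass."""
    return state["accuracy"]

def time_to_train(target, max_epochs=100):
    """Train until the validation metric reaches the quality target and
    return (epochs, elapsed minutes) -- the quantity 'Time to Train' reports."""
    state = {"accuracy": 0.0}
    start = time.time()
    for epoch in range(1, max_epochs + 1):
        train_one_epoch(state)
        if evaluate(state) >= target:
            return epoch, (time.time() - start) / 60
    raise RuntimeError("quality target not reached within max_epochs")

epochs, minutes = time_to_train(target=0.72)  # e.g., BERT's 0.72 Mask-LM target
print(f"converged in {epochs} epochs ({minutes:.4f} minutes)")
```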
NVIDIA Performance on MLPerf 4.1 Training Benchmarks
NVIDIA Performance on MLPerf 4.1’s AI Benchmarks: Single Node, Closed Division
| Framework | Network | Time to Train (mins) | MLPerf Quality Target | GPU | Server | MLPerf-ID | Precision | Dataset | GPU Version |
|---|---|---|---|---|---|---|---|---|---|
| NVIDIA NeMo | Llama2-70B-LoRA | 12.9 | 0.925 cross entropy loss | 8x B200 | dgx_b200_preview | 4.1-0027 | Mixed | SCROLLS GovReport | NVIDIA HGX B200 |
| | | 24.1 | 0.925 cross entropy loss | 8x H200 | NVIDIA H200 | 4.1-0022 | Mixed | SCROLLS GovReport | H200-SXM5-141GB |
| | | 27.9 | 0.925 cross entropy loss | 8x H100 | Eos | 4.1-0002 | Mixed | SCROLLS GovReport | H100-SXM5-80GB |
| NVIDIA DGL | R-GAT | 5.5 | 72.0% classification | 8x B200 | dgx_b200_preview | 4.1-0025 | Mixed | IGBH-Full | NVIDIA HGX B200 |
| | | 7.7 | 72.0% classification | 8x H200 | NVIDIA H200 | 4.1-0018 | Mixed | IGBH-Full | H200-SXM5-141GB |
| | | 11.2 | 72.0% classification | 8x H100 | Eos | 4.1-0000 | Mixed | IGBH-Full | H100-SXM5-80GB |
| NVIDIA Merlin HugeCTR | DLRM-dcnv2 | 2.4 | 0.80275 AUC | 8x B200 | dgx_b200_preview | 4.1-0026 | Mixed | Criteo 3.5TB Click Logs | NVIDIA HGX B200 |
| | | 3.5 | 0.80275 AUC | 8x H200 | NVIDIA H200 | 4.1-0019 | Mixed | Criteo 3.5TB Click Logs | H200-SXM5-141GB |
| | | 3.9 | 0.80275 AUC | 8x H100 | Eos | 4.1-0001 | Mixed | Criteo 3.5TB Click Logs | H100-SXM5-80GB |
| NVIDIA NeMo | Stable Diffusion v2.0 | 19.5 | FID<=90 and CLIP>=0.15 | 8x B200 | dgx_b200_preview | 4.1-0027 | Mixed | LAION-400M-filtered | NVIDIA HGX B200 |
| | | 30.5 | FID<=90 and CLIP>=0.15 | 8x H200 | NVIDIA H200 | 4.1-0022 | Mixed | LAION-400M-filtered | H200-SXM5-141GB |
| | | 33.9 | FID<=90 and CLIP>=0.15 | 8x H100 | Eos | 4.1-0002 | Mixed | LAION-400M-filtered | H100-SXM5-80GB |
| PyTorch | BERT | 3.8 | 0.72 Mask-LM accuracy | 8x B200 | dgx_b200_preview | 4.1-0028 | Mixed | Wikipedia 2020/01/01 | NVIDIA HGX B200 |
| | | 5.2 | 0.72 Mask-LM accuracy | 8x H200 | NVIDIA H200 | 4.1-0020 | Mixed | Wikipedia 2020/01/01 | H200-SXM5-141GB |
| | | 5.5 | 0.72 Mask-LM accuracy | 8x H100 | Eos | 4.1-0004 | Mixed | Wikipedia 2020/01/01 | H100-SXM5-80GB |
| PyTorch | RetinaNet | 22.5 | 34.0% mAP | 8x B200 | dgx_b200_preview | 4.1-0028 | Mixed | Subset of OpenImages | NVIDIA HGX B200 |
| | | 34.3 | 34.0% mAP | 8x H200 | NVIDIA H200 | 4.1-0021 | Mixed | Subset of OpenImages | H200-SXM5-141GB |
| | | 35.7 | 34.0% mAP | 8x H100 | Eos | 4.1-0003 | Mixed | Subset of OpenImages | H100-SXM5-80GB |
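To compare rows, divide one entry's time-to-train by another's for the same workload. A minimal sketch, with figures copied from the table above:

```python
# Relative speedup = baseline time-to-train / candidate time-to-train.
# Figures copied from the single-node table above (minutes).
time_to_train_mins = {
    "Llama2-70B-LoRA": {"B200": 12.9, "H200": 24.1, "H100": 27.9},
    "DLRM-dcnv2":      {"B200": 2.4,  "H200": 3.5,  "H100": 3.9},
}

for model, times in time_to_train_mins.items():
    speedup = times["H100"] / times["B200"]
    print(f"{model}: B200 trains {speedup:.2f}x faster than H100")
# Llama2-70B-LoRA: B200 trains 2.16x faster than H100
# DLRM-dcnv2: B200 trains 1.62x faster than H100
```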
NVIDIA Performance on MLPerf 4.1’s AI Benchmarks: Multi Node, Closed Division
| Framework | Network | Time to Train (mins) | MLPerf Quality Target | GPU | Server | MLPerf-ID | Precision | Dataset | GPU Version |
|---|---|---|---|---|---|---|---|---|---|
| NVIDIA NeMo | GPT3 | 193.7 | 2.69 log perplexity | 64x B200 | dgx_b200_preview_n8 | 4.1-0029 | Mixed | c4/en/3.0.1 | NVIDIA HGX B200 |
| | | 96.7 | 2.69 log perplexity | 256x H100 | Eos_n32 | 4.1-0009 | Mixed | c4/en/3.0.1 | H100-SXM5-80GB |
| | | 49.8 | 2.69 log perplexity | 512x H100 | Eos_n64 | 4.1-0012 | Mixed | c4/en/3.0.1 | H100-SXM5-80GB |
| | | 3.4 | 2.69 log perplexity | 11,616x H100 | Eos-dfw_n1452 | 4.1-0024 | Mixed | c4/en/3.0.1 | H100-SXM5-80GB |
| NVIDIA NeMo | Llama2-70B-LoRA | 4.6 | 0.925 cross entropy loss | 64x H100 | Eos_n8 | 4.1-0015 | Mixed | SCROLLS GovReport | H100-SXM5-80GB |
| | | 1.2 | 0.925 cross entropy loss | 1,024x H100 | Eos_n128 | 4.1-0006 | Mixed | SCROLLS GovReport | H100-SXM5-80GB |
| DGL | R-GAT | 2.1 | 72.0% classification | 64x H100 | Eos_n8 | 4.1-0013 | Mixed | IGBH-Full | H100-SXM5-80GB |
| | | 0.9 | 72.0% classification | 512x H100 | Eos_n64 | 4.1-0011 | Mixed | IGBH-Full | H100-SXM5-80GB |
| NVIDIA Merlin HugeCTR | DLRM-dcnv2 | 1.3 | 0.80275 AUC | 64x H100 | Eos_n8 | 4.1-0014 | Mixed | Criteo 3.5TB Click Logs | H100-SXM5-80GB |
| | | 1.0 | 0.80275 AUC | 128x H100 | Eos_n16 | 4.1-0007 | Mixed | Criteo 3.5TB Click Logs | H100-SXM5-80GB |
| NVIDIA NeMo | Stable Diffusion v2.0 | 6.1 | FID<=90 and CLIP>=0.15 | 64x H100 | Eos_n8 | 4.1-0015 | Mixed | LAION-400M-filtered | H100-SXM5-80GB |
| | | 1.7 | FID<=90 and CLIP>=0.15 | 512x H100 | Eos_n64 | 4.1-0012 | Mixed | LAION-400M-filtered | H100-SXM5-80GB |
| | | 1.4 | FID<=90 and CLIP>=0.15 | 1,024x H100 | Eos_n128 | 4.1-0005 | Mixed | LAION-400M-filtered | H100-SXM5-80GB |
| PyTorch | BERT | 0.9 | 0.72 Mask-LM accuracy | 64x H100 | Eos_n8 | 4.1-0016 | Mixed | Wikipedia 2020/01/01 | H100-SXM5-80GB |
| | | 0.1 | 0.72 Mask-LM accuracy | 3,472x H100 | Eos_n434 | 4.1-0010 | Mixed | Wikipedia 2020/01/01 | H100-SXM5-80GB |
| PyTorch | RetinaNet | 6.0 | 34.0% mAP | 64x H100 | Eos_n8 | 4.1-0017 | Mixed | Subset of OpenImages | H100-SXM5-80GB |
| | | 0.8 | 34.0% mAP | 2,528x H100 | Eos_n316 | 4.1-0008 | Mixed | Subset of OpenImages | H100-SXM5-80GB |
MLPerf™ v4.1 Training Closed: 4.1-0000, 4.1-0001, 4.1-0002, 4.1-0003, 4.1-0004, 4.1-0005, 4.1-0006, 4.1-0007, 4.1-0008, 4.1-0009, 4.1-0010, 4.1-0011, 4.1-0012, 4.1-0013, 4.1-0014, 4.1-0015, 4.1-0016, 4.1-0017, 4.1-0018, 4.1-0019, 4.1-0020, 4.1-0021, 4.1-0022, 4.1-0024, 4.1-0025, 4.1-0026, 4.1-0027, 4.1-0028, 4.1-0029 | MLPerf name and logo are trademarks. See https://mlcommons.org/ for more information.
For MLPerf™ training rules and guidelines, see https://mlcommons.org/.
B200 results are preview submissions.
NVIDIA Performance on MLPerf 3.0’s Training HPC Benchmarks: Closed Division
| Framework | Network | Time to Train (mins) | MLPerf Quality Target | GPU | Server | MLPerf-ID | Precision | Dataset | GPU Version |
|---|---|---|---|---|---|---|---|---|---|
| PyTorch | CosmoFlow | 2.1 | Mean average error 0.124 | 512x H100 | eos | 3.0-8006 | Mixed | CosmoFlow N-body cosmological simulation data with 4 cosmological parameter targets | H100-SXM5-80GB |
| | DeepCAM | 0.8 | IOU 0.82 | 2,048x H100 | eos | 3.0-8007 | Mixed | CAM5+TECA climate simulation with 3 target classes (atmospheric river, tropical cyclone, background) | H100-SXM5-80GB |
| | OpenCatalyst | 10.7 | Forces mean absolute error 0.036 | 640x H100 | eos | 3.0-8008 | Mixed | Open Catalyst 2020 (OC20) S2EF 2M training split, ID validation set | H100-SXM5-80GB |
| | OpenFold | 7.5 | Local Distance Difference Test (lDDT-Cα) >= 0.8 | 2,080x H100 | eos | 3.0-8009 | Mixed | OpenProteinSet and Protein Data Bank | H100-SXM5-80GB |
MLPerf™ v3.0 Training HPC Closed: 3.0-8006, 3.0-8007, 3.0-8008, 3.0-8009 | MLPerf name and logo are trademarks. See https://mlcommons.org/ for more information.
For MLPerf™ v3.0 Training HPC rules and guidelines, see https://mlcommons.org/.
LLM Training Performance on NVIDIA Data Center Products
H100 Training Performance
| Framework | Model | Time to Train (days) | Throughput per GPU | GPU | Server | Container Version | Sequence Length | TP | PP | CP | Precision | Global Batch Size | GPU Version |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| NeMo | Llama3.1 405B | 36 | 314 tokens/sec | 576x H100 | Eos | nemo:24.09 | 8192 | 8 | 9 | 2 | FP8 | 252 | H100 SXM5 80GB |
| | Llama3 8B | 0.8 | 13,443 tokens/sec | 8x H100 | Eos | nemo:24.09 | 8192 | 1 | 1 | 2 | FP8 | 128 | H100 SXM5 80GB |
| | Llama3 70B | 7.3 | 1,557 tokens/sec | 64x H100 | Eos | nemo:24.09 | 8192 | 4 | 4 | 2 | FP8 | 128 | H100 SXM5 80GB |
| | Nemotron 8B | 0.9 | 12,701 tokens/sec | 64x H100 | Eos | nemo:24.09 | 4096 | 2 | 1 | 1 | FP8 | 256 | H100 SXM5 80GB |
| | Nemotron 15B | 1.5 | 7,516 tokens/sec | 64x H100 | Eos | nemo:24.09 | 4096 | 4 | 1 | 1 | FP8 | 256 | H100 SXM5 80GB |
| | Nemotron 22B | 2.3 | 4,980 tokens/sec | 64x H100 | Eos | nemo:24.09 | 4096 | 2 | 4 | 1 | FP8 | 256 | H100 SXM5 80GB |
| | Nemotron 340B | 32.7 | 346 tokens/sec | 128x H100 | Eos | nemo:24.09 | 4096 | 8 | 8 | 1 | FP8 | 32 | H100 SXM5 80GB |
TP: Tensor Parallelism
PP: Pipeline Parallelism
CP: Context Parallelism
Time to Train is estimated time to train on 1T tokens with 1K GPUs
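The estimate itself is simple arithmetic: the token budget divided by the aggregate cluster throughput. Below is a minimal sketch that reproduces two rows of the table; it assumes "1K GPUs" means 1,024 (which matches the published figures) and also shows how TP, PP, and CP relate to the total GPU count, with the leftover factor being data-parallel replicas.

```python
# Reproduce "Time to Train (days)" from measured per-GPU throughput.
# Assumption: the 1T-token budget runs on 1,024 GPUs ("1K" read as binary K).
TOKENS = 1e12
NUM_GPUS = 1024
SECONDS_PER_DAY = 86_400

def days_to_train(tokens_per_sec_per_gpu: float) -> float:
    """Token budget divided by aggregate cluster throughput, in days."""
    return TOKENS / (tokens_per_sec_per_gpu * NUM_GPUS) / SECONDS_PER_DAY

def data_parallel_size(total_gpus: int, tp: int, pp: int, cp: int) -> int:
    """GPUs not consumed by TP x PP x CP hold data-parallel model replicas."""
    assert total_gpus % (tp * pp * cp) == 0
    return total_gpus // (tp * pp * cp)

print(f"{days_to_train(314):.1f} days")            # Llama3.1 405B row -> 36.0
print(f"{days_to_train(13_443):.1f} days")         # Llama3 8B row -> 0.8
print(data_parallel_size(576, tp=8, pp=9, cp=2))   # Llama3.1 405B -> DP = 4
```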
Converged Training Performance on NVIDIA Data Center GPUs
H200 Training Performance
| Framework | Framework Version | Network | Time to Train (mins) | Accuracy | Throughput | GPU | Server | Container | Precision | Batch Size | Dataset | GPU Version |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PyTorch | 2.4.0a0 | Tacotron2 | 65 | 0.56 Training Loss | 496,465 total output mels/sec | 8x H200 | DGX H200 | 24.12-py3 | TF32 | 128 | LJSpeech 1.1 | NVIDIA H200 |
| | 2.4.0a0 | WaveGlow | 106 | -5.7 Training Loss | 4,124,433 output samples/sec | 8x H200 | DGX H200 | 24.12-py3 | Mixed | 10 | LJSpeech 1.1 | NVIDIA H200 |
| | 2.4.0a0 | NCF | | 0.96 Hit Rate at 10 | 252,318,096 samples/sec | 8x H200 | DGX H200 | 24.12-py3 | Mixed | 131072 | MovieLens 20M | NVIDIA H200 |
| | 2.4.0a0 | FastPitch | 66 | 0.17 Training Loss | 1,465,568 frames/sec | 8x H200 | DGX H200 | 24.12-py3 | TF32 | 32 | LJSpeech 1.1 | NVIDIA H200 |
| | 2.4.0a0 | Transformer XL Large | 264 | 17.82 Perplexity | 317,663 total tokens/sec | 8x H200 | DGX H200 | 24.12-py3 | Mixed | 16 | WikiText-103 | NVIDIA H200 |
| | 2.4.0a0 | Transformer XL Base | 116 | 21.6 Perplexity | 1,163,450 total tokens/sec | 8x H200 | DGX H200 | 24.12-py3 | Mixed | 128 | WikiText-103 | NVIDIA H200 |
| | 2.4.0a0 | EfficientDet-D0 | 303 | 0.33 BBOX mAP | 2,793 images/sec | 8x H200 | DGX H200 | 24.12-py3 | Mixed | 150 | COCO 2017 | NVIDIA H200 |
| | 2.4.0a0 | HiFiGAN | 915 | 9.75 Training Loss | 120,606 total output mels/sec | 8x H200 | DGX H200 | 24.12-py3 | Mixed | 16 | LJSpeech 1.1 | NVIDIA H200 |
H100 Training Performance
| Framework | Framework Version | Network | Time to Train (mins) | Accuracy | Throughput | GPU | Server | Container | Precision | Batch Size | Dataset | GPU Version |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PyTorch | 2.4.0a0 | Tacotron2 | | . Training Loss | 477,113 total output mels/sec | 8x H100 | DGX H100 | 24.12-py3 | Mixed | 128 | LJSpeech 1.1 | H100-SXM5-80GB |
| | 2.4.0a0 | WaveGlow | | . Training Loss | 3,809,464 output samples/sec | 8x H100 | DGX H100 | 24.12-py3 | Mixed | 10 | LJSpeech 1.1 | H100-SXM5-80GB |
| | 2.4.0a0 | NCF | | . Hit Rate at 10 | 212,174,107 samples/sec | 8x H100 | DGX H100 | 24.12-py3 | TF32 | 131072 | MovieLens 20M | H100-SXM5-80GB |
| | 2.4.0a0 | FastPitch | | . Training Loss | 1,431,758 frames/sec | 8x H100 | DGX H100 | 24.12-py3 | TF32 | 32 | LJSpeech 1.1 | H100-SXM5-80GB |
A30 Training Performance
| Framework | Framework Version | Network | Time to Train (mins) | Accuracy | Throughput | GPU | Server | Container | Precision | Batch Size | Dataset | GPU Version |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PyTorch | 2.4.0a0 | Tacotron2 | 129 | 0.53 Training Loss | 237,526 total output mels/sec | 8x A30 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 104 | LJSpeech 1.1 | NVIDIA A30 |
| | 2.4.0a0 | WaveGlow | 402 | -5.88 Training Loss | 1,047,359 output samples/sec | 8x A30 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 10 | LJSpeech 1.1 | NVIDIA A30 |
| | 2.4.0a0 | GNMT v2 | 49 | 24.23 BLEU Score | 306,590 total tokens/sec | 8x A30 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 128 | wmt16-en-de | NVIDIA A30 |
| | 2.4.0a0 | NCF | 1 | 0.96 Hit Rate at 10 | 41,902,951 samples/sec | 8x A30 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 131072 | MovieLens 20M | NVIDIA A30 |
| | 2.4.0a0 | FastPitch | 153 | 0.17 Training Loss | 547,338 frames/sec | 8x A30 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 16 | LJSpeech 1.1 | NVIDIA A30 |
| | 2.4.0a0 | Transformer XL Base | 196 | 22.82 Perplexity | 168,548 total tokens/sec | 8x A30 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 32 | WikiText-103 | NVIDIA A30 |
| | 2.4.0a0 | EfficientNet-B0 | 785 | 77.15 Top 1 | 11,335 images/sec | 8x A30 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 128 | Imagenet2012 | NVIDIA A30 |
| | 2.4.0a0 | EfficientNet-WideSE-B0 | 800 | 77.08 Top 1 | 11,029 images/sec | 8x A30 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 128 | Imagenet2012 | NVIDIA A30 |
| | 2.4.0a0 | MoFlow | 99 | 86.8 NUV | 12,284 molecules/sec | 8x A30 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 512 | ZINC | NVIDIA A30 |
A10 Training Performance
| Framework | Framework Version | Network | Time to Train (mins) | Accuracy | Throughput | GPU | Server | Container | Precision | Batch Size | Dataset | GPU Version |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PyTorch | 2.4.0a0 | Tacotron2 | 145 | 0.53 Training Loss | 210,315 total output mels/sec | 8x A10 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 104 | LJSpeech 1.1 | NVIDIA A10 |
| | 2.4.0a0 | WaveGlow | 543 | -5.8 Training Loss | 776,028 output samples/sec | 8x A10 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 10 | LJSpeech 1.1 | NVIDIA A10 |
| | 2.4.0a0 | GNMT v2 | 57 | 24.29 BLEU Score | 262,936 total tokens/sec | 8x A10 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 128 | wmt16-en-de | NVIDIA A10 |
| | 2.4.0a0 | NCF | 2 | 0.96 Hit Rate at 10 | 33,005,044 samples/sec | 8x A10 | GIGABYTE G482-Z52-00 | 24.09-py3 | TF32 | 131072 | MovieLens 20M | NVIDIA A10 |
| | 2.4.0a0 | FastPitch | 180 | 0.17 Training Loss | 462,052 frames/sec | 8x A10 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 16 | LJSpeech 1.1 | NVIDIA A10 |
| | 2.4.0a0 | Transformer XL Base | 262 | 22.82 Perplexity | 126,073 total tokens/sec | 8x A10 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 32 | WikiText-103 | NVIDIA A10 |
| | 2.4.0a0 | EfficientNet-B0 | 1,035 | 77.06 Top 1 | 8,508 images/sec | 8x A10 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 128 | Imagenet2012 | NVIDIA A10 |
| | 2.4.0a0 | EfficientNet-WideSE-B0 | 1,061 | 77.23 Top 1 | 8,301 images/sec | 8x A10 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 128 | Imagenet2012 | NVIDIA A10 |
| | 2.4.0a0 | MoFlow | 100 | 88.14 NUV | 12,237 molecules/sec | 8x A10 | GIGABYTE G482-Z52-00 | 24.09-py3 | Mixed | 512 | ZINC | NVIDIA A10 |
AI Inference
Real-world inference demands high throughput and low latency with maximum efficiency across use cases. An industry-leading solution lets customers quickly deploy AI models into real-world production with the highest performance from data center to edge.
AI Pipeline
NVIDIA Riva is an application framework for multimodal conversational AI services that deliver real-time performance on GPUs.