H100

Nov 08, 2023
Setting New Records at Data Center Scale Using NVIDIA H100 GPUs and NVIDIA Quantum-2 InfiniBand
Generative AI is rapidly transforming computing, unlocking new use cases and turbocharging existing ones. Large language models (LLMs), such as OpenAI’s GPT...
19 MIN READ

Sep 28, 2023
NVIDIA H100 System for HPC and Generative AI Sets Record for Financial Risk Calculations
Generative AI is taking the world by storm, from large language models (LLMs) to generative pretrained transformer (GPT) models to diffusion models. NVIDIA is...
7 MIN READ

Aug 22, 2023
Simplifying GPU Application Development with Heterogeneous Memory Management
Heterogeneous Memory Management (HMM) is a CUDA memory management feature that extends the simplicity and productivity of the CUDA Unified Memory programming...
16 MIN READ
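
The teaser above describes HMM letting CUDA code use ordinary system-allocated memory. As a rough illustration (not taken from the post itself), here is a minimal sketch assuming an HMM-capable system (supported GPU, driver, and OS kernel with HMM enabled): the kernel dereferences plain malloc'd host memory directly, with no cudaMallocManaged and no explicit copies. The kernel and sizes are illustrative only.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative kernel that writes directly into host-allocated memory.
__global__ void fill(int* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = i;
}

int main() {
    const int n = 1 << 20;

    // Plain malloc: no cudaMallocManaged, no cudaMemcpy. On an HMM-capable
    // system the GPU can fault these pages in on demand.
    int* data = static_cast<int*>(malloc(n * sizeof(int)));

    fill<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();

    printf("data[42] = %d (expected 42)\n", data[42]);
    free(data);
    return 0;
}
```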

Jun 05, 2023
CUDA 12.1 Supports Large Kernel Parameters
CUDA kernel function parameters are passed to the device through constant memory and have been limited to 4,096 bytes. CUDA 12.1 increases this parameter limit...
5 MIN READ
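
The excerpt above notes that kernel parameters travel through constant memory and were capped at 4,096 bytes before CUDA 12.1. As a minimal sketch of what the raised limit permits, assuming a CUDA 12.1+ toolkit and a supported GPU, the hypothetical kernel below takes an 8,192-byte struct by value, which older toolkits would reject at compile time.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical parameter block: 2,048 floats = 8,192 bytes, which exceeds
// the pre-12.1 limit of 4,096 bytes for kernel parameters.
struct BigParams {
    float coeffs[2048];
};

// The struct is passed by value through the kernel parameter space rather
// than via a pointer to a separate global-memory allocation.
__global__ void scale(const float* in, float* out, int n, BigParams p) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * p.coeffs[i % 2048];
}

int main() {
    const int n = 1 << 20;
    float *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));

    BigParams p;
    for (int i = 0; i < 2048; ++i) p.coeffs[i] = 1.0f;

    // With CUDA 12.1+ this compiles and launches; earlier toolkits fail to
    // compile the kernel because its parameter list exceeds 4,096 bytes.
    scale<<<(n + 255) / 256, 256>>>(d_in, d_out, n, p);
    cudaDeviceSynchronize();

    printf("launch status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```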

Mar 22, 2023
NVIDIA-Certified Next-Generation Computing Platforms for AI, Video, and Data Analytics Performance
The business applications of GPU-accelerated computing are set to expand greatly in the coming years. One of the fastest-growing trends is the use of generative...
7 MIN READ

Mar 30, 2022
Build Mainstream Servers for AI Training and 5G with the NVIDIA H100 CNX
There is an ongoing demand for servers with the ability to transfer data from the network to a GPU at ever faster speeds. As AI models keep getting bigger, the...
5 MIN READ

Mar 22, 2022
NVIDIA Hopper Architecture In-Depth
Today during the 2022 NVIDIA GTC Keynote address, NVIDIA CEO Jensen Huang introduced the new NVIDIA H100 Tensor Core GPU based on the new NVIDIA Hopper GPU...
36 MIN READ