Posts by Brian Slechta
Generative AI / LLMs
Sep 05, 2024
Low Latency Inference Chapter 1: Up to 1.9x Higher Llama 3.1 Performance with Medusa on NVIDIA HGX H200 with NVLink Switch
As large language models (LLMs) continue to grow in size and complexity, multi-GPU compute is a must-have to deliver the low latency and high throughput that...
5 MIN READ
Generative AI / LLMs
Aug 12, 2024
NVIDIA NVLink and NVIDIA NVSwitch Supercharge Large Language Model Inference
Large language models (LLMs) are getting larger, increasing the amount of compute required to process inference requests. To meet real-time latency requirements...
8 MIN READ
Data Center / Cloud
Jun 12, 2024
Demystifying AI Inference Deployments for Trillion Parameter Large Language Models
AI is transforming every industry, addressing grand human scientific challenges such as precision drug discovery and the development of autonomous vehicles, as...
14 MIN READ