TensorRT-LLM
Sep 05, 2024
Low Latency Inference Chapter 1: Up to 1.9x Higher Llama 3.1 Performance with Medusa on NVIDIA HGX H200 with NVLink Switch
As large language models (LLMs) continue to grow in size and complexity, multi-GPU compute is a must-have to deliver the low latency and high throughput that...
5 MIN READ
Aug 28, 2024
Boosting Llama 3.1 405B Performance up to 1.44x with NVIDIA TensorRT Model Optimizer on NVIDIA H200 GPUs
The Llama 3.1 405B large language model (LLM), developed by Meta, is an open-source community model that delivers state-of-the-art performance and supports a...
7 MIN READ
Aug 28, 2024
NVIDIA Blackwell Platform Sets New LLM Inference Records in MLPerf Inference v4.1
Large language model (LLM) inference is a full-stack challenge. Powerful GPUs, high-bandwidth GPU-to-GPU interconnects, efficient acceleration libraries, and a...
13 MIN READ
Aug 28, 2024
Deploy Diverse AI Apps with Multi-LoRA Support on RTX AI PCs and Workstations
Today’s large language models (LLMs) achieve unprecedented results across many use cases. Yet, application developers often need to customize and tune these...
10 MIN READ
Aug 21, 2024
Mistral-NeMo-Minitron 8B Foundation Model Delivers Unparalleled Accuracy
Last month, NVIDIA and Mistral AI unveiled Mistral NeMo 12B, a leading state-of-the-art large language model (LLM). Mistral NeMo 12B consistently outperforms...
5 MIN READ
Aug 14, 2024
Optimizing Inference Efficiency for LLMs at Scale with NVIDIA NIM Microservices
As large language models (LLMs) continue to evolve at an unprecedented pace, enterprises are looking to build generative AI-powered applications that maximize...
8 MIN READ
Aug 07, 2024
Writer Releases Domain-Specific LLMs for Healthcare and Finance
Writer has released two new domain-specific AI models, Palmyra-Med 70B and Palmyra-Fin 70B, expanding the capabilities of NVIDIA NIM. These models bring...
6 MIN READ
Aug 06, 2024
Accelerating Hebrew LLM Performance with NVIDIA TensorRT-LLM
Developing a high-performing Hebrew large language model (LLM) presents distinct challenges stemming from the rich and complex nature of the Hebrew language...
8 MIN READ
Jul 25, 2024
Revolutionizing Code Completion with Codestral Mamba, the Next-Gen Coding LLM
In the rapidly evolving field of generative AI, coding models have become indispensable tools for developers, enhancing productivity and precision in software...
5 MIN READ
Jul 16, 2024
New Workshops: Customize LLMs, Build and Deploy Large Neural Networks
Register now for an instructor-led public workshop in July, August or September. Space is limited.
1 MIN READ
Jul 02, 2024
Achieving High Mixtral 8x7B Performance with NVIDIA H100 Tensor Core GPUs and NVIDIA TensorRT-LLM
As large language models (LLMs) continue to grow in size and complexity, the performance requirements for serving them quickly and cost-effectively continue to...
9 MIN READ
Jul 01, 2024
Google's New Gemma 2 Model Now Optimized and Available on NVIDIA API Catalog
Gemma 2, the next generation of Google Gemma models, is now optimized with TensorRT-LLM and packaged as an NVIDIA NIM inference microservice.
1 MIN READ
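For readers who want a feel for what "available on the NVIDIA API catalog" means in practice, here is a minimal sketch of calling a hosted Gemma 2 NIM endpoint through its OpenAI-compatible API. The base URL, the model identifier, and the NVIDIA_API_KEY environment variable are assumptions drawn from the catalog's usual conventions, not details taken from the linked post; check the catalog entry for the exact values.

```python
# Minimal sketch (assumed endpoint and model ID): query a hosted Gemma 2 NIM
# endpoint on the NVIDIA API catalog via its OpenAI-compatible chat API.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed catalog endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed API key variable
)

completion = client.chat.completions.create(
    model="google/gemma-2-9b-it",  # assumed model identifier
    messages=[{"role": "user", "content": "Summarize what TensorRT-LLM does."}],
    max_tokens=128,
)
print(completion.choices[0].message.content)
```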
Jun 28, 2024
Create RAG Applications Using NVIDIA NIM and Haystack on Kubernetes
A step-by-step guide to building robust, scalable RAG applications with Haystack and NVIDIA NIM microservices on Kubernetes.
1 MIN READ
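As a rough illustration of the pattern the guide covers, the sketch below stuffs retrieved passages into a prompt and sends it to an LLM NIM microservice running inside the cluster. It is not the Haystack pipeline from the guide: the in-cluster service URL, port, and model ID are hypothetical placeholders, and the retriever is assumed to be supplied separately (by Haystack or otherwise).

```python
# Conceptual RAG sketch (placeholder service URL and model ID), not the
# Haystack pipeline from the linked guide. Assumes an LLM NIM microservice
# is already deployed in the cluster and exposes an OpenAI-compatible API.
from openai import OpenAI

NIM_URL = "http://llm-nim.default.svc.cluster.local:8000/v1"  # hypothetical in-cluster service

client = OpenAI(base_url=NIM_URL, api_key="not-needed-for-local-nim")

def answer(question: str, retrieved_docs: list[str]) -> str:
    """Combine retrieved passages with the question and ask the NIM for an answer."""
    context = "\n\n".join(retrieved_docs)
    messages = [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
    resp = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",  # placeholder model ID
        messages=messages,
        max_tokens=256,
    )
    return resp.choices[0].message.content

# Usage: pass in passages returned by your retriever, e.g.
# print(answer("What does NIM provide?", ["NIM packages optimized inference engines ..."]))
```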
Jun 12, 2024
Demystifying AI Inference Deployments for Trillion Parameter Large Language Models
AI is transforming every industry, addressing grand human scientific challenges such as precision drug discovery and the development of autonomous vehicles, as...
14 MIN READ
Jun 11, 2024
Maximum Performance and Minimum Footprint for AI Apps with NVIDIA TensorRT Weight-Stripped Engines
NVIDIA TensorRT, an established inference library for data centers, has rapidly emerged as a desirable inference backend for NVIDIA GeForce RTX and NVIDIA RTX...
8 MIN READ
Jun 03, 2024
NVIDIA Collaborates with Hugging Face to Simplify Generative AI Model Deployments
As generative AI experiences rapid growth, the community has stepped up to foster this expansion in two significant ways: swiftly publishing state-of-the-art...
4 MIN READ