AI Models
Explore and deploy top AI models built by the community, accelerated by NVIDIA’s AI inference platform, and run on NVIDIA-accelerated infrastructure.
Llama
Llama is Meta’s collection of open foundation models, most recently made multimodal with the 2025 release of Llama 4. NVIDIA worked with Meta to advance inference of these models with NVIDIA TensorRT™-LLM (TRT-LLM) to get maximum performance from data center GPUs like NVIDIA Blackwell and NVIDIA Hopper™ architecture GPUs. Optimized versions of several Llama models are available as NVIDIA NIM™ microservices for an easy-to-deploy experience. You can also customize Llama with your own data using the end-to-end NVIDIA NeMo™ framework.
Explore
Explore sample applications to learn about different use cases for Llama models.
Integrate
Get started with the right tools and frameworks for your AI model development environment.
Optimize
Optimize inference workloads for large language models (LLMs) with TensorRT-LLM. Learn how to set up and get started using Llama in TRT-LLM.
Get started with the models for your development environment.
Get Production-Ready Llama Models With NVIDIA NIM
The NVIDIA API Catalog enables rapid prototyping with just an API call.
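As a minimal sketch of that API call: the API Catalog exposes an OpenAI-compatible chat-completions endpoint at `integrate.api.nvidia.com`. The model id below (`meta/llama-3.1-8b-instruct`) is an assumption; check the catalog page for the exact identifier for the model you want.

```python
import json
import urllib.request

# Model id assumed; verify the exact name on the API Catalog page.
PAYLOAD = {
    "model": "meta/llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Summarize NVIDIA NIM in one sentence."}],
    "max_tokens": 128,
}

def build_request(api_key: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request for the NVIDIA API Catalog."""
    return urllib.request.Request(
        "https://integrate.api.nvidia.com/v1/chat/completions",
        data=json.dumps(PAYLOAD).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send (needs an API key from the API Catalog):
#   with urllib.request.urlopen(build_request(os.environ["NVIDIA_API_KEY"])) as resp:
#       print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, the same request shape works with any OpenAI-style client library by pointing its base URL at the catalog endpoint.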
Llama 4 on Ollama
Ollama lets you deploy Llama 4 quickly to all your GPUs.
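The typical Ollama flow is a pull followed by a run; a sketch, assuming the `llama4` model tag (check the Ollama model library for the exact name and size variants):

```shell
# Download the model weights (tag assumed; verify in the Ollama library)
ollama pull llama4
# Start an interactive session, or pass a one-shot prompt
ollama run llama4 "Explain mixture-of-experts routing in two sentences."
```

Ollama also serves a local REST API once a model is running, so the same deployment can back applications as well as interactive use.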
Quantized Llama 3.1 8B on Hugging Face
NVIDIA Llama 3.1 8B Instruct is optimized by quantization to FP8 using the open-source TensorRT Model Optimizer library. Compatible with data center and consumer devices.
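The FP8 post-training quantization flow behind a checkpoint like this can be sketched with Model Optimizer's `modelopt.torch.quantization` API. This is an illustrative sketch, not the exact recipe used for the published model: the base checkpoint id and the tiny calibration set are placeholders, and real flows calibrate on substantially more data.

```python
# Sketch of post-training FP8 quantization with TensorRT Model Optimizer
# (pip install nvidia-modelopt). mtq.quantize and FP8_DEFAULT_CFG follow the
# documented modelopt.torch.quantization API; checkpoint and prompts are
# illustrative placeholders.

def quantize_llama_fp8(model_id: str = "meta-llama/Llama-3.1-8B-Instruct"):
    # Imports deferred so the sketch reads without the heavy deps installed.
    import torch
    import modelopt.torch.quantization as mtq
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    def forward_loop(m):
        # Run a small calibration set through the model to set activation scales;
        # production flows use a much larger, representative dataset.
        for prompt in ["FP8 halves weight memory.", "Calibration sets scales."]:
            m(**tokenizer(prompt, return_tensors="pt"))

    # Insert FP8 quantizers and calibrate; the returned model can then be
    # exported for TensorRT-LLM deployment.
    return mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop)
```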
DeepSeek
DeepSeek is a family of open-source models that features several powerful models using a mixture-of-experts (MoE) architecture and provides advanced reasoning capabilities. DeepSeek models can be optimized for performance using TensorRT-LLM for data center deployments. You can use NIM to try out the models for yourself or customize with the open-source NeMo framework.
Explore
Explore sample applications to learn about different use cases for DeepSeek models.
Integrate
Get started with the right tools and frameworks for your development environment.
Optimize
Optimize inference workloads for LLMs with TensorRT-LLM. Learn how to set up and get started using DeepSeek in TensorRT-LLM.
Quantize DeepSeek R1 to FP4 With TensorRT Model Optimizer
TensorRT Model Optimizer now has an experimental feature to deploy to vLLM. Check out the workflow.
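A sketch of serving the quantized checkpoint with vLLM, assuming the `nvidia/DeepSeek-R1-FP4` Hugging Face repository; since the feature is experimental, flags and supported hardware may change between releases:

```shell
pip install vllm
# Serve the FP4 checkpoint behind an OpenAI-compatible endpoint;
# tensor parallelism degree depends on your GPU count and memory.
vllm serve nvidia/DeepSeek-R1-FP4 --tensor-parallel-size 8
```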
Get started with the models for your development environment.
Get Production-Ready DeepSeek Models With NVIDIA NIM
Rapid prototyping is just an API call away.
NVIDIA DeepSeek R1 FP4
The NVIDIA DeepSeek R1 FP4 model is the quantized version of the DeepSeek R1 model, which is an autoregressive language model that uses an optimized transformer architecture. The NVIDIA DeepSeek R1 FP4 model is quantized with TensorRT Model Optimizer.
DeepSeek on Ollama
Ollama lets you deploy DeepSeek quickly to all your GPUs.
Gemma
Gemma is Google DeepMind’s family of lightweight, open models. Gemma models span a variety of sizes and specialized domains to meet each developer's unique needs. NVIDIA has worked with Google to enable these models to run optimally on a variety of NVIDIA’s platforms, ensuring you get maximum performance on your hardware, from data center GPUs like NVIDIA Blackwell and NVIDIA Hopper architecture chips to Windows RTX and Jetson devices. Enterprise customers can deploy optimized containers using NVIDIA NIM microservices for production-grade support and customize using the end-to-end NeMo framework. With the latest release of Gemma 3n, these models are now natively multilingual and multimodal for your text, image, video, and audio data.
Explore
Explore sample applications to learn about different use cases for Gemma models.
Integrate
Use Gemma on your devices and make it your own.
Read the Blog: Run Google DeepMind’s Gemma 3n on NVIDIA Jetson and RTX
Optimize
Optimize inference workloads for LLMs with TensorRT-LLM. Learn how to set up and get started using Gemma in TensorRT-LLM.
Read the Blog: NVIDIA TensorRT-LLM Revs Up Inference for Google Gemma
Get started with the models for your development environment.
Get Started With Gemma Models With NVIDIA NIM
Gemma 3 is now featured on the NVIDIA API Catalog, enabling rapid prototyping with just an API call.
Gemma 3 Models on Ollama
Ollama lets you start experimenting in seconds with the most capable Gemma model that runs on a single NVIDIA H100 Tensor Core GPU.
Gemma-2b-it ONNX INT4
The Gemma-2b-it ONNX INT4 model is quantized with TensorRT Model Optimizer. Easily fine-tune and adapt the model to your unique requirements with Hugging Face’s Transformers library or your preferred development environment.
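As a sketch of the Transformers starting point: the INT4 ONNX artifact itself is consumed by ONNX runtimes, so adaptation typically begins from the original weights. The `google/gemma-2b-it` checkpoint id is the standard Hugging Face name for the base instruction-tuned model.

```python
# Sketch: loading Gemma with Hugging Face Transformers as a fine-tuning
# starting point (pip install transformers torch).

def load_gemma(model_id: str = "google/gemma-2b-it"):
    # Deferred imports keep the sketch readable without the deps installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return model, tokenizer

# Usage: generate with the loaded pair, then fine-tune with your trainer of choice.
# model, tok = load_gemma()
# out = model.generate(**tok("Hello", return_tensors="pt"), max_new_tokens=20)
```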
Phi
Microsoft Phi is a family of Small Language Models (SLMs) that provide efficient performance for commercial and research tasks. These models are trained on high quality training data and excel in mathematical reasoning, code generation, advanced reasoning, summarization, long document QA, and information retrieval. Due to their small size, Phi models can be deployed on devices in single GPU environments, such as Windows RTX and Jetson. With the launch of the Phi-4 series of models, Phi has expanded to include advanced reasoning and multimodality.
Explore
Explore sample applications to learn about different use cases for Phi models.
Integrate
Get started with the right tools and frameworks for your development environment.
Optimize
Optimize inference workloads for LLMs with TensorRT-LLM. Learn how to set up and get started using Phi in TRT-LLM.
Get started with the models for your development environment.
Get Production-Ready Phi Models With NVIDIA NIM
The NVIDIA API Catalog enables rapid prototyping with just an API call.
Phi on Ollama
Ollama lets you deploy Phi quickly to all your GPUs.
Phi-3.5-mini-Instruct INT4 ONNX
The Phi-3.5-mini-Instruct INT4 ONNX model is the quantized version of the Microsoft Phi-3.5-mini-Instruct model, which has 3.8 billion parameters.
More Resources
Ethical AI
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns here.
Try top community models today.