AI Models

Explore and deploy top AI models built by the community, accelerated by NVIDIA’s AI inference platform, and run on NVIDIA-accelerated infrastructure.



Llama

Llama is Meta’s collection of open foundation models, most recently made multimodal with the 2025 release of Llama 4. NVIDIA worked with Meta to optimize inference of these models with NVIDIA TensorRT™-LLM (TRT-LLM) to get maximum performance from data center GPUs built on the NVIDIA Blackwell and NVIDIA Hopper™ architectures. Optimized versions of several Llama models are available as NVIDIA NIM™ microservices for an easy-to-deploy experience. You can also customize Llama with your own data using the end-to-end NVIDIA NeMo™ framework.

Get started with the models for your development environment.

Model

Get Production-Ready Llama Models With NVIDIA NIM

The NVIDIA API Catalog enables rapid prototyping with just an API call.
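
If you want to see what that API call looks like, here is a minimal sketch using the OpenAI-compatible endpoint the API Catalog exposes; the Llama 4 model ID shown is illustrative, so check build.nvidia.com for the IDs currently offered.

    # Minimal sketch: prototyping against a Llama NIM endpoint on the NVIDIA
    # API Catalog via its OpenAI-compatible API. The model ID is illustrative.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://integrate.api.nvidia.com/v1",
        api_key="nvapi-...",  # your NVIDIA API Catalog key
    )

    completion = client.chat.completions.create(
        model="meta/llama-4-maverick-17b-128e-instruct",  # illustrative ID
        messages=[{"role": "user", "content": "What are NIM microservices?"}],
        max_tokens=256,
    )
    print(completion.choices[0].message.content)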

Model

Llama 4 on Ollama

Ollama lets you deploy Llama 4 quickly to all your GPUs.
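
As a rough sketch of what that looks like from Python, the snippet below uses the community ollama client; the llama4 model tag is an assumption, so substitute whichever tag you pulled.

    # Minimal sketch: chatting with a locally pulled Llama model through the
    # ollama Python client (pip install ollama). Assumes a running Ollama
    # server and that `ollama pull llama4` (or another tag) has been run.
    import ollama

    response = ollama.chat(
        model="llama4",  # assumed tag; use the one you pulled
        messages=[{"role": "user", "content": "Explain MoE routing briefly."}],
    )
    print(response["message"]["content"])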

Model

Quantized Llama 3.1 8B on Hugging Face

NVIDIA Llama 3.1 8B Instruct is optimized by quantization to FP8 using the open-source TensorRT Model Optimizer library. Compatible with data center and consumer devices.
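
For orientation, this is roughly what FP8 post-training quantization with Model Optimizer looks like; the checkpoint name and the two-prompt calibration loop are placeholder assumptions, and real calibration would use a representative dataset.

    # Minimal sketch: FP8 post-training quantization with TensorRT Model
    # Optimizer (pip install nvidia-modelopt). Checkpoint and calibration
    # prompts are placeholders.
    import modelopt.torch.quantization as mtq
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed HF checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda")

    def forward_loop(m):
        # Run a few prompts so the quantizer can collect activation statistics.
        for prompt in ("Hello, world.", "Summarize FP8 quantization."):
            inputs = tokenizer(prompt, return_tensors="pt").to(m.device)
            m(**inputs)

    # Insert FP8 quantizers, then calibrate them with the forward loop above.
    model = mtq.quantize(model, mtq.FP8_DEFAULT_CFG, forward_loop)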


View More Family Models

DeepSeek

DeepSeek is a family of open-source models built on a mixture-of-experts (MoE) architecture that delivers advanced reasoning capabilities. DeepSeek models can be optimized for performance using TensorRT-LLM for data center deployments. You can use NIM to try the models for yourself or customize them with the open-source NeMo framework.

Integrate

Get started with the right tools and frameworks for your development environment.

Optimize

Optimize inference workloads for LLMs with TensorRT-LLM. Learn how to set up and get started using Llama in TensorRT-LLM.
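
As a rough starting point, TensorRT-LLM's high-level LLM API looks like the sketch below; the checkpoint name is an assumption, and sampling parameter names have shifted slightly between releases.

    # Minimal sketch: running Llama through TensorRT-LLM's high-level LLM API
    # (pip install tensorrt-llm). The checkpoint name is an assumption.
    from tensorrt_llm import LLM, SamplingParams

    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # builds or loads an engine
    sampling = SamplingParams(temperature=0.2, max_tokens=128)

    for output in llm.generate(["What does TensorRT-LLM optimize?"], sampling):
        print(output.outputs[0].text)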

Get started with the models for your development environment.

Model

Get Production-Ready DeepSeek Models With NVIDIA NIM

Rapid prototyping is just an API call away. 
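
Because R1 is a reasoning model that can emit long chains of thought, streaming the response is the natural pattern. Here is a minimal sketch against the API Catalog's OpenAI-compatible endpoint; verify the current model ID on build.nvidia.com.

    # Minimal sketch: streaming a DeepSeek-R1 response from the NVIDIA API
    # Catalog via its OpenAI-compatible API. Model ID may change; verify it.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://integrate.api.nvidia.com/v1",
        api_key="nvapi-...",  # your NVIDIA API Catalog key
    )

    stream = client.chat.completions.create(
        model="deepseek-ai/deepseek-r1",
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)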

Model

NVIDIA DeepSeek R1 FP4

NVIDIA DeepSeek R1 FP4 is a quantized version of DeepSeek R1, an autoregressive language model that uses an optimized transformer architecture. The model is quantized to FP4 with TensorRT Model Optimizer.

Model

DeepSeek on Ollama

Ollama lets you deploy DeepSeek quickly to all your GPUs.

View More Family Models

Gemma

Gemma is Google DeepMind’s family of lightweight, open models. Gemma models span a variety of sizes and specialized domains to meet each developer's unique needs. NVIDIA has worked with Google to enable these models to run optimally across NVIDIA platforms, ensuring you get maximum performance on your hardware, from data center GPUs built on the NVIDIA Blackwell and NVIDIA Hopper architectures to Windows RTX and Jetson devices. Enterprise customers can deploy optimized containers using NVIDIA NIM microservices for production-grade support and customize using the end-to-end NeMo framework. With the latest release of Gemma 3n, these models are now natively multilingual and multimodal for your text, image, video, and audio data.

Get started with the models for your development environment.

Model

Get Started With Gemma Models Using NVIDIA NIM

Gemma 3 is now featured on the NVIDIA API Catalog, enabling rapid prototyping with just an API call.

Model

Gemma 3 Models on Ollama

Ollama lets you start experimenting in seconds with the most capable Gemma model that runs on a single NVIDIA H100 Tensor Core GPU.

Model

Gemma-2b-it ONNX INT4

The Gemma-2b-it ONNX INT4 model is quantized with TensorRT Model Optimizer. Easily fine-tune and adapt the model to your unique requirements with Hugging Face’s Transformers library or your preferred development environment.
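
One common way to do that adaptation is a LoRA adapter on the standard Hugging Face checkpoint (the INT4 ONNX file itself is a deployment artifact, not a training target). A minimal sketch with Transformers and PEFT:

    # Minimal sketch: attaching a LoRA adapter to Gemma with Transformers +
    # PEFT. Fine-tuning targets the HF checkpoint; the INT4 ONNX export is
    # regenerated afterward for deployment.
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-2b-it"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    lora = LoraConfig(
        r=16, lora_alpha=32, task_type="CAUSAL_LM",
        target_modules=["q_proj", "v_proj"],  # Gemma attention projections
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only the small adapter weights train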

View More Family Models

Phi

Microsoft Phi is a family of small language models (SLMs) that provide efficient performance for commercial and research tasks. These models are trained on high-quality data and excel at mathematical reasoning, code generation, advanced reasoning, summarization, long-document QA, and information retrieval. Because of their small size, Phi models can be deployed in single-GPU environments, such as Windows RTX and Jetson devices. With the launch of the Phi-4 series of models, Phi has expanded to include advanced reasoning and multimodality.

Integrate

Get started with the right tools and frameworks for your development environment.

Optimize

Optimize inference workloads for LLMs with TensorRT-LLM. Learn how to set up and get started using Phi in TensorRT-LLM.

Get started with the models for your development environment.

Model

Get Production-Ready Phi Models With NVIDIA NIM

The NVIDIA API Catalog enables rapid prototyping with just an API call.

Model

Phi on Ollama

Ollama lets you deploy Phi quickly to all your GPUs.

Model

Phi-3.5-mini-Instruct INT4 ONNX

The Phi-3.5-mini-Instruct INT4 ONNX model is the quantized version of the Microsoft Phi-3.5-mini-Instruct model, which has 3.8 billion parameters.
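
INT4 ONNX exports like this are typically run with the onnxruntime-genai runtime. The sketch below assumes a local model folder and recent API names, both of which have shifted between releases, so treat it as orientation rather than a reference.

    # Minimal sketch: generating text from an INT4 ONNX Phi model with
    # onnxruntime-genai (pip install onnxruntime-genai). The local path is a
    # placeholder, and exact API names vary across releases.
    import onnxruntime_genai as og

    model = og.Model("./phi-3.5-mini-instruct-int4-onnx")  # placeholder path
    tokenizer = og.Tokenizer(model)
    stream = tokenizer.create_stream()

    params = og.GeneratorParams(model)
    params.set_search_options(max_length=256)

    generator = og.Generator(model, params)
    generator.append_tokens(tokenizer.encode("<|user|>\nWhat is an SLM?<|end|>\n<|assistant|>\n"))
    while not generator.is_done():
        generator.generate_next_token()
        print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)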

View More Family Models

More Resources

Join the NVIDIA Developer Program

Get Training and Certification

Accelerate Your Startup


Ethical AI

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using these models in accordance with our terms of service, developers should work with their supporting model team to ensure the models meet the requirements of the relevant industry and use case and address unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI concerns here.

Try top community models today.

Contact Us