
NVIDIA Metropolis

NVIDIA Metropolis features GPU-accelerated SDKs and developer tools that give you a faster, more cost-effective way to build, deploy, and scale AI-enabled video analytics and IoT applications—from the edge to the cloud.


Get Started Learn About APIs & Microservices
NVIDIA Metropolis includes a host of SDKs and developer tools

Explore All the Benefits

Faster Builds

Use and customize high-performance, pretrained models, or your own models, to streamline deploying AI applications across a range of industries. Jump-start application development by building off modular microservices and reference applications.

Lower Cost

Powerful SDKs—including NVIDIA TensorRT™, DeepStream, and TAO Toolkit—reduce overall solution cost by maximizing inference throughput and optimizing hardware usage on NVIDIA platforms and infrastructure.

Flexible Deployments

Manage and scale AI deployments securely with NVIDIA Fleet Command™. You can also deploy flexibly using cloud-native Metropolis Microservices and containerized apps, with options for on-premises, cloud, or hybrid deployments.

Metropolis APIs and Microservices for the Edge

Develop and deploy faster on NVIDIA Jetson.



As the AI landscape evolves rapidly, developers face increasingly complex and lengthy development cycles when building vision AI applications for the edge. NVIDIA Metropolis brings a collection of powerful APIs and microservices that let developers easily develop and deploy applications on the NVIDIA Jetson edge-AI platform.


Read the News Download Software Get Started

Powerful Tools for
AI-Enabled Video Analytics

The Metropolis suite of SDKs provides a variety of starting points for AI application development and deployment.




Generate – Synthetic Data Generation

NVIDIA Omniverse™ Replicator

Generate physically accurate 3D synthetic data at scale, or build your own synthetic data tools and frameworks. Bootstrap perception AI model training and achieve accurate Sim2Real performance without having to manually curate and label real-world data.


Learn More
NVIDIA Omniverse Replicator generates 3D synthetic data

Train – Application-Specific Model Customization

Pretrained Models

Eliminate the time-consuming process of building models from scratch. Choose from more than 100 permutations of highly accurate models and generic neural network architectures, or start with our task-based models to recognize human actions and poses, detect people in crowded spaces, classify vehicles and license plates, and much more.


Learn More Try Pretrained Models With Jupyter Notebook

TAO Toolkit

The Train, Adapt, and Optimize (TAO) Toolkit is a low-code AI model development solution that lets you use the power of transfer learning to fine-tune NVIDIA pretrained models with your own data and optimize them for inference, all without AI expertise or a large training dataset.


Learn More Try TAO on LaunchPad
Leverage NVIDIA TAO Toolkit to train, adapt, and optimize AI model development
Logos of popular AI model frameworks

Many AI Model Frameworks

Create your AI models and applications on these popular NVIDIA-supported AI frameworks. Integrate any existing AI model into the Metropolis workflow, and customize existing models in TensorFlow, PyTorch, and more by converting them to TAO.


Learn More

Build – Powerful AI Applications

TensorRT

This SDK for high-performance deep learning inference includes an inference optimizer and runtime that deliver low latency and high throughput, both on edge devices and in the cloud. TensorRT supports all popular frameworks, including TensorFlow and PyTorch. Powering NVIDIA solutions such as JetPack™ and DeepStream, TensorRT is a gateway to accelerated inferencing.


Learn More
TensorRT SDK for high-performance deep learning inference
NVIDIA DeepStream SDK - a complete streaming analytics toolkit

DeepStream SDK

NVIDIA DeepStream SDK is a complete streaming analytics toolkit based on GStreamer for AI-based multi-sensor processing, video, audio, and image understanding. It’s ideal for vision AI developers, software partners, startups, and OEMs building IVA apps and services.


Learn More Try DeepStream on LaunchPad
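Because DeepStream is built on GStreamer, applications are often prototyped as gst-launch-style pipelines before being written as full apps. A minimal sketch of a single-stream pipeline assembled in Python, using DeepStream's GStreamer plugins (the input file and inference config path are placeholders):

```python
# Illustrative gst-launch-style DeepStream pipeline. Element names
# (nvstreammux, nvinfer, nvdsosd, etc.) are DeepStream plugins; the
# source file and nvinfer config path are placeholders for your own.
elements = [
    "filesrc location=sample_720p.h264",  # placeholder input stream
    "h264parse",
    "nvv4l2decoder",                      # hardware-accelerated decode
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720",
    "nvinfer config-file-path=config_infer_primary.txt",  # placeholder config
    "nvvideoconvert",
    "nvdsosd",                            # draws bounding boxes and labels
    "nveglglessink",                      # on-screen display sink
]
pipeline = " ! ".join(elements)
print(pipeline)
```

On a Jetson or dGPU system with DeepStream installed, the printed string could be passed to `gst-launch-1.0` to run the pipeline end to end.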

Triton Inference Server

NVIDIA Triton™ is open-source, multi-framework inference serving software for deploying, running, and scaling AI models in production on both GPUs and CPUs. It supports all major frameworks, including TensorFlow and PyTorch, and maximizes inference throughput on any platform.


Learn More
Triton Inference Server maximizes inference throughput on any platform
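Triton's HTTP endpoint follows the KServe v2 inference protocol: a JSON body describing the input tensors is POSTed to `/v2/models/<model-name>/infer`. A sketch of building such a payload, where the model, tensor names, and shape are hypothetical and should be replaced with the values from your model's configuration:

```python
import json

# Hypothetical request body for Triton's KServe v2 HTTP inference protocol.
# "input_1"/"output_1" and the NCHW shape are placeholders; query
# GET /v2/models/<name> on a running server for the real names and shapes.
payload = {
    "inputs": [
        {
            "name": "input_1",            # hypothetical input tensor name
            "shape": [1, 3, 224, 224],    # batch of one NCHW image
            "datatype": "FP32",
            "data": [0.0] * (3 * 224 * 224),  # flattened pixel values
        }
    ],
    "outputs": [{"name": "output_1"}],    # hypothetical output tensor name
}

body = json.dumps(payload)
# This body would be POSTed to:
#   http://<triton-host>:8000/v2/models/<model-name>/infer
print(len(json.loads(body)["inputs"][0]["data"]))  # 150528 values
```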
Video Storage Toolkit (VST) manages and stores footage for large volumes of video cameras

Video Storage Toolkit (VST)

Easily manage and store footage for large volumes of video cameras with hardware-accelerated video decoding, streaming, and storage. Get started quickly with the included web-based user interface and take advantage of VST flexibility through intuitive REST APIs. It’s available for NVIDIA Jetson Xavier™ and Orin™ devices.


Learn More

Metropolis Microservices

This suite of cloud-native microservices and reference applications fast-tracks development and deployment of vision AI applications. Unlock business insights for spaces ranging from roadways to airports to retail stores, in significantly shortened development cycles.


Learn More
Use Metropolis Microservices to develop and deploy vision AI apps
 NVIDIA CUDA-X libraries help with pre-processing and model performance

CUDA-X Libraries

Take advantage of low-level libraries and primitives for computer vision and more that can help with pre-processing and model performance. NVIDIA® CUDA-X™, built on top of NVIDIA CUDA®, is a collection of libraries, tools, and technologies that deliver dramatically higher performance in compute-intensive algorithms spanning complex math, deep learning, and image processing.
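As a CPU stand-in for the kind of pre-processing these libraries accelerate on the GPU (for example with NPP or DALI), the usual normalize-and-transpose step before inference looks like this in NumPy; the frame here is synthetic and the mean/std values are the commonly used ImageNet statistics:

```python
import numpy as np

# CPU sketch of typical image pre-processing: cast, scale to [0, 1],
# normalize per channel, and reorder HWC -> CHW for most DL runtimes.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(720, 1280, 3), dtype=np.uint8)  # mock frame

mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # common ImageNet stats
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

normalized = (frame.astype(np.float32) / 255.0 - mean) / std
chw = np.transpose(normalized, (2, 0, 1))  # HWC -> CHW
print(chw.shape)  # (3, 720, 1280)
```

In a GPU pipeline, the same transform would run on device memory so decoded frames never round-trip through the CPU.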


Deploy – Application Management and Scaling

Fleet Command

Streamline the provisioning and deployment of systems and AI applications at the edge with NVIDIA Fleet Command. A managed platform for container orchestration, it simplifies the management of distributed computing environments with the scale and resiliency of the cloud, turning every site into a secure, intelligent location.


Learn More Experience Fleet Command on LaunchPad
NVIDIA Fleet Command deploys vision AI apps at the edge

Cloud Containers

Combine NVIDIA SDKs to easily create containerized applications with Docker, Kubernetes, and the NVIDIA GPU Operator, and deploy cloud-native solutions on Jetson, x86, and dGPU platforms.


Learn More
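A containerized app of this kind is typically built on one of the NGC base images. A minimal Dockerfile sketch; the base-image tag, application directory, and entrypoint script are all illustrative, so check NGC (nvcr.io) for current DeepStream container versions:

```dockerfile
# Illustrative only: verify the current DeepStream tag on NGC before use.
FROM nvcr.io/nvidia/deepstream:6.3-samples
COPY my_app/ /opt/my_app/     # hypothetical application directory
WORKDIR /opt/my_app
CMD ["./run_pipeline.sh"]     # hypothetical entrypoint script
```

The resulting image could be run locally with `docker run --gpus all`, or scheduled across a cluster with Kubernetes and the NVIDIA GPU Operator.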

Get Started With Sample Applications

Use TAO and DeepStream for an action recognition app

Action Recognition

Learn how to develop and deploy a no-code action recognition application using TAO and DeepStream.

Read the Blog Try it Out
 Integrate TAO with DeepStream for face mask detection

Face Mask Detection

Integrate TAO with DeepStream for a 10X reduction in development time when creating a real-time face mask detection edge application.

Read the Blog Try it Out
Use TAO Toolkit to optimize pose estimation

Pose Estimation

Learn how to create a gesture recognition application with robot interactions. Also, train and optimize a 2D pose estimation model with NVIDIA TAO Toolkit.

Read the Blog
Use TAO with DeepStream for number plate detection

Number Plate Detection

Learn how to combine TAO with DeepStream for a license plate detection and understanding app.

Read the Blog

View all Metropolis technical blogs

Explore NVIDIA GTC Talks On-Demand

Develop, deploy, and scale AI-enabled video analytics applications with NVIDIA Metropolis.


Get Started