
New Features and Applications Make Deploying Edge AI Easy with NVIDIA Fleet Command

NVIDIA Fleet Command is a cloud service that securely deploys, manages, and scales AI applications across distributed edge infrastructure. Since Fleet Command launched in July, it has reached several significant milestones, which are being showcased at NVIDIA GTC.

Fleet Command adds remote console 2.0, MIG support, encrypted file storage, and more

New features are added to Fleet Command constantly. In addition to making the platform more robust and secure, these updates incorporate feedback from customers and partners to make it easier for organizations to get started with AI at the edge.

New features available today include:

  • Multi-Instance GPU (MIG) Support: Allows a single GPU to be securely partitioned into multiple isolated GPU instances (see the sketch after this list).
  • Remote Console 2.0: Eliminates the need for additional ports and launches faster, providing just-in-time secure remote access.
  • Secure NFS Support: Allows the use of existing encrypted storage solutions.
  • Advanced Networking: Supports proxies, networks without DHCP, resiliency to IP address changes, and corresponding observability from the cloud.
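
For readers who want to see what MIG partitioning looks like on a node, here is a minimal, illustrative sketch using the NVML Python bindings (pynvml). It only inspects MIG state, assumes pynvml is installed on the host, and is not part of Fleet Command itself, which manages MIG from the cloud.

```python
# Minimal sketch: list MIG status on each GPU with the NVML Python bindings.
# Assumes nvidia-ml-py (pynvml) is installed on the node; illustrative only.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        except pynvml.NVMLError:
            print(f"GPU {i} ({name}): MIG not supported")
            continue
        print(f"GPU {i} ({name}): MIG current={current} pending={pending}")
        if current:
            # Enumerate the MIG devices (GPU instances) carved out of this GPU.
            for m in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)):
                try:
                    mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, m)
                except pynvml.NVMLError:
                    continue
                print(f"  MIG device {m} present")
finally:
    pynvml.nvmlShutdown()
```

Each MIG device reported this way has its own slice of memory and compute, which is what lets multiple edge applications share one GPU with predictable quality of service.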

Because Fleet Command is a SaaS platform, capabilities are updated without users needing to upgrade, ensuring every customer always has the best possible experience and immediate access to new innovation.

NVIDIA LaunchPad allows users to fully experience Fleet Command for free

At GTC this year, we announced worldwide availability of NVIDIA LaunchPad. 

NVIDIA LaunchPad is a program that gives enterprises and organizations immediate, short-term access to NVIDIA AI running on accelerated compute to speed up the development and deployment of modern, data-driven applications. It allows you to quickly test and prototype across the entire AI workflow on the same complete stack you can purchase and deploy.

Organizations can use LaunchPad to get full access to Fleet Command to experience deploying and scaling AI for free. Users get access to a catalog of optimized models and applications to explore how AI can benefit their organization, a turnkey cloud service to easily deploy and monitor real applications on real servers, accelerated edge computing infrastructure to seamlessly provision and run applications, and more. 

Figure 1. NVIDIA LaunchPad allows users to trial every stage of AI development, from data prep and training at scale with NVIDIA Base Command to simulation and deployment at scale with NVIDIA Fleet Command.

Interested in getting started with Fleet Command on NVIDIA LaunchPad? Apply for immediate access.

Introducing the NVIDIA A2 Tensor Core GPU for inference at the edge

Figure 2. The A2 completes the NVIDIA inference GPU lineup, with the A100 as the highest-performance compute option, the A30 as mainstream compute, and now the A2 at the entry level.

Also announced at GTC this year is the NVIDIA A2 GPU for entry-level and edge servers.

The NVIDIA A2 Tensor Core GPU is optimized for entry-level inference, intelligent video analytics, and NVIDIA AI deployments. This is especially useful for edge servers constrained by space and thermal requirements, such as those in 5G edge and industrial environments. A2 delivers a low-profile form factor and a low-power envelope, with a TDP configurable from 60 W down to 40 W, making it ideal for any edge server.
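
As a quick way to confirm that power envelope on a given server, the supported power-limit range can be read back with the NVML Python bindings. The following is a minimal sketch, assuming pynvml is installed; it is not specific to Fleet Command or the A2, and NVML reports values in milliwatts.

```python
# Minimal sketch: inspect a GPU's configurable power envelope via pynvml.
# Illustrative only; setting a new limit typically requires root privileges.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
    current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
    print(f"Power limit: {current_mw / 1000:.0f} W "
          f"(configurable {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")
    # To lower the envelope (for example, toward 40 W on an A2), an
    # administrator could call:
    # pynvml.nvmlDeviceSetPowerManagementLimit(handle, 40_000)
finally:
    pynvml.nvmlShutdown()
```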

A2’s versatility, compact size, and low power exceed the demands of edge deployments at scale, instantly upgrading existing entry-level CPU servers to handle inference. Servers accelerated with A2 GPUs deliver up to 37X higher inference performance compared to CPUs and 1.3X more efficient computer vision deployments than previous GPU generations, all at an entry-level price point.

A2 is available in leading OEM NVIDIA-Certified Systems™, which deliver breakthrough inference performance across edge, data center, and cloud, ensuring AI-enabled applications deploy with fewer servers, less power, and lower cost. As part of the NVIDIA-Certified Systems program, systems built with the NVIDIA A2 GPU are compatible with Fleet Command.

Two new categories for NVIDIA-Certified Systems at the edge

NVIDIA-Certified Systems now include categories for edge systems. These systems have been certified to run accelerated applications outside a traditional data center and provide:

  • Excellent performance for inference: Systems with the NVIDIA A30 have been shown to provide leading performance on a range of inference workloads, as evidenced by the September 2021 MLPerf benchmark results. A30 Multi-Instance GPU (MIG) support also allows up to four workloads to run simultaneously, each with its own guaranteed quality of service, maximizing compute utilization and efficiency.
  • Low-power consumption: The new A2 GPU will be included in the NVIDIA-Certified Systems program soon. It delivers entry-level, low-power, compact acceleration for edge AI and inference, and offers the smallest footprint in the NVIDIA enterprise GPU portfolio.
  • Security: Validation of Trusted Platform Module functionality enables secure boot, disk encryption, and the use of digitally signed applications (see the sketch after this list).
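
As an illustration of what those security capabilities look like on a Linux edge host, the short sketch below checks whether the kernel exposes a TPM and whether UEFI Secure Boot is active. It relies only on standard Linux/UEFI sysfs paths and is not part of the NVIDIA-Certified Systems validation tooling.

```python
# Minimal sketch: check for a TPM device and UEFI Secure Boot state on a
# Linux edge host. Illustrative only; uses standard kernel sysfs locations.
from pathlib import Path


def has_tpm() -> bool:
    # A TPM exposed by the kernel shows up under /sys/class/tpm (e.g., tpm0).
    return any(Path("/sys/class/tpm").glob("tpm*"))


def secure_boot_enabled() -> bool:
    # The SecureBoot EFI variable ends with a single status byte: 1 = enabled.
    for var in Path("/sys/firmware/efi/efivars").glob("SecureBoot-*"):
        data = var.read_bytes()
        return bool(data and data[-1] == 1)
    return False


if __name__ == "__main__":
    print(f"TPM present:        {has_tpm()}")
    print(f"Secure Boot active: {secure_boot_enabled()}")
```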

There are two categories of certified edge systems. Enterprise edge systems are meant to be deployed in controlled environments, such as a retail store. Industrial edge systems are designed for rugged deployments, such as environments with elevated temperatures.

Figure 3. New NVIDIA-Certified Systems are ideal for enterprise and industrial edge environments, such as a factory floor.

Fleet Command is compatible with all NVIDIA-Certified Systems, including enterprise edge and industrial edge systems. Learn more about NVIDIA-Certified Systems.

New partners mean new applications are available on Fleet Command

New partners have been added to our ecosystem to provide organizations with brand new choices of applications to deploy at the edge via Fleet Command. By connecting our application partners to our customers building edge solutions, we accelerate the time to AI and streamline processes that could take weeks or months.

New application partners include Data Monsters, which is using computer vision for industrial inspection and manufacturing; Ironyun, which is employing AI-enabled video analytics for security, access control, safety, and search; and Kinetic Vision, which is building an end-to-end, AI-powered computer vision solution for consumer-packaged goods organizations.

Fleet Command continues to gain significant momentum with both customers and partners. To stay up to date, be sure to check out all of the GTC sessions related to Fleet Command and edge computing. Read more about our sessions here.
