
Harnessing the Power of NVIDIA AI Enterprise on Azure Machine Learning

In today's rapidly evolving technological landscape, AI is transforming industries, automating processes, and opening new opportunities for innovation. As more businesses recognize the value of incorporating AI into their operations, they face the challenge of implementing these technologies efficiently, effectively, and reliably.

Enter NVIDIA AI Enterprise, a comprehensive software suite designed to help organizations implement enterprise-ready AI, machine learning (ML), and data analytics at scale with security, reliability, API stability, and enterprise-grade support.

What is NVIDIA AI Enterprise?

Deploying AI solutions can be complex, requiring specialized hardware and software, as well as expert knowledge to develop and maintain these systems. NVIDIA AI Enterprise addresses these challenges by providing a complete ecosystem of tools, libraries, frameworks, and support services tailored for enterprise environments. 

NVIDIA AI Enterprise enables enterprises to run AI workloads more efficiently, cost-effectively, and at scale. It is built on top of the NVIDIA CUDA-X AI software stack, which provides its high-performance, GPU-accelerated computing capabilities.

The suite includes:

  1. Virtual machine image (VMI): A preconfigured virtual machine image that includes the necessary drivers and software to support GPU-accelerated AI workloads in the major clouds.
  2. AI frameworks: Software that can run in a VMI (such as PyTorch, TensorFlow, RAPIDS, NVIDIA Triton with TensorRT and ONNX support, and more) that serves as the basis for AI development and deployment.
  3. Pretrained models: Models that can be used as-is, or fine-tuned on enterprise-relevant data.
  4. AI workflows: Prepackaged reference examples that illustrate how AI frameworks and pretrained models can be leveraged to build AI solutions for common business problems. These workflows provide guidance on fine-tuning pretrained models and creating AI models on top of NVIDIA frameworks. They highlight the pipelines used to create applications, along with opinionated guidance on how to deploy customized applications and integrate them with components typically found in enterprise environments, such as software for orchestration and management, storage, security, and networking. Available AI workflows include:
  • Intelligent virtual assistant: Engaging around-the-clock contact center assistance for lower operational costs.
  • Audio transcription: World-class, accurate transcripts based on GPU-optimized models.
  • Digital fingerprinting threat detection: Cybersecurity threat detection and alert prioritization to identify and act faster.
  • Next item prediction: Personalized product recommendations for increased customer engagement and retention.
  • Route optimization: Vehicle and robot routing optimization to reduce travel times and fuel costs.

Supported software with release branches

One of the main benefits of using the software available in NVIDIA AI Enterprise is that it is supported by NVIDIA with security and stability as guiding principles. NVIDIA AI Enterprise includes three release branches to cater to varying requirements across industries and use cases:

  1. Latest Release Branch: Geared towards those needing top-of-the-tree software optimizations, this branch will have a monthly release cadence, ensuring users have access to the latest features and improvements. CVE patches, along with bug fixes, will also be included in roll-forward releases.
  2. Production Release Branch: Designed for environments that prioritize API stability, this branch will receive monthly CVE patches and bug fixes, with two new branches introduced each year, each having a 9-month lifespan. To ensure seamless transitions and support, there will be a 3-month overlap period between two consecutive production branches. Production branches will be available in the second half of 2023.
  3. Long-Term Release Branch: Tailored for highly regulated industries where long-term support is paramount, this branch will receive quarterly CVE patches and bug fixes and offers up to 3 years of support for a particular release. Complementing this long-term stability is a 6-month overlap period to ensure smooth transitions between versions, thus providing the longevity and consistency needed for these highly regulated industries.
Figure 1. The three release branches of NVIDIA AI Enterprise serve varying requirements across industries and use cases

How to use NVIDIA AI Enterprise with Microsoft Azure Machine Learning

Microsoft Azure Machine Learning is a platform for AI development in the cloud and on premises. It includes services for training, experimenting with, deploying, and monitoring models, as well as for designing and constructing prompt flows for large language models. As an open platform, Azure Machine Learning supports all popular machine learning frameworks and toolkits, including those from NVIDIA AI Enterprise.

This collaboration optimizes the experience of running NVIDIA AI software by integrating it with the Azure Machine Learning training and inference platform. Users no longer need to spend time setting up training environments, installing packages, writing training code, logging training metrics, or deploying models. With this integration, users can leverage the power of NVIDIA enterprise-ready software, complementing Azure Machine Learning's high-performance, secure infrastructure, to build production-ready AI workflows.

To get started today, follow these steps:

1. Sign in to Microsoft Azure and launch Azure Machine Learning Studio.

2. View and access all prebuilt NVIDIA AI Enterprise Components, Environments, and Models from the NVIDIA AI Enterprise Preview Registry (Figure 2).

Figure 2. NVIDIA AI Enterprise Preview Registry on Azure Machine Learning
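
As an alternative to browsing in the studio UI, the same registry assets can be listed programmatically with the Azure Machine Learning Python SDK v2. The following is a minimal sketch; the registry name is a placeholder for the preview registry name shown in your studio.

```python
# Minimal sketch: list the prebuilt assets in the NVIDIA AI Enterprise registry.
# The registry name is a placeholder for the preview registry shown in the studio UI.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

credential = DefaultAzureCredential()

# MLClient scoped to a shared registry rather than to a workspace
registry_client = MLClient(credential=credential, registry_name="<nvidia-ai-enterprise-preview-registry>")

# Enumerate the Components, Environments, and Models published to the registry
for component in registry_client.components.list():
    print("component:", component.name)

for environment in registry_client.environments.list():
    print("environment:", environment.name)

for model in registry_client.models.list():
    print("model:", model.name)
```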

3. Use these assets from within a workspace to create ML pipelines within the designer through simple drag and drop (Figure 3). 

Figure 3. Pipelines in Azure Machine Learning using NVIDIA AI Enterprise components
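
For teams that prefer code over the designer canvas, a similar pipeline can be assembled with the SDK by pulling a prebuilt component from the registry. The sketch below is illustrative only; the component name, its input and output names, and the compute target are placeholders rather than actual registry identifiers.

```python
# Illustrative sketch: build a pipeline from a registry component in code.
# The component name, its input/output names, and the compute target are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, Input, dsl

credential = DefaultAzureCredential()

# Workspace client submits jobs; registry client resolves the shared component
ml_client = MLClient(credential, "<subscription-id>", "<resource-group>", "<workspace-name>")
registry_client = MLClient(credential=credential, registry_name="<nvidia-ai-enterprise-preview-registry>")

# Pull a prebuilt NVIDIA AI Enterprise component from the registry
nvidia_component = registry_client.components.get(name="<component-name>", version="<version>")

@dsl.pipeline(compute="<gpu-cluster-name>", description="Pipeline using an NVIDIA AI Enterprise component")
def nvidia_pipeline(input_data):
    # The keyword argument must match the component's declared input name (placeholder here)
    step = nvidia_component(input_data=input_data)
    return {"results": step.outputs.output_data}

pipeline_job = nvidia_pipeline(
    Input(type="uri_folder", path="azureml://datastores/workspaceblobstore/paths/<input-folder>/")
)
ml_client.jobs.create_or_update(pipeline_job, experiment_name="nvidia-ai-enterprise-demo")
```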

Find NVIDIA AI Enterprise sample assets in the Azure Machine Learning registry. Visit NVIDIA_AI_Enterprise_AzureML on GitHub to find code for the preview assets.

Use case: Body pose estimation 

Using the various elements within the NVIDIA AI Enterprise Preview Registry is easy. This example showcases a computer vision task that uses NVIDIA DeepStream for body pose estimation. NVIDIA TAO Toolkit provides the basis for the body pose model and the ability to refine it with new data.

Figure 4 shows a video analytics pipeline example running the NVIDIA DeepStream sample app for body pose estimation. It runs on a GPU cluster and can be easily adapted to leverage updated models and videos, unlocking the power of the Azure Machine Learning platform.

Figure 4. NVIDIA TAO Toolkit and NVIDIA DeepStream for body pose estimation with Azure Machine Learning

The example includes two URI-based data assets created for storing the inputs for the DeepStream sample app command component. The data assets leverage a pretrained model, which is readily available in the NVIDIA AI Enterprise Registry. They also include additional calibration and label information.
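
One way such URI-based data assets might be registered is with the SDK v2 Data entity. The asset names and blob paths below are illustrative placeholders, not the sample's actual locations.

```python
# Illustrative sketch: register URI-based data assets for the DeepStream command component.
# Asset names and blob paths are placeholders, not the sample's actual locations.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")

# Pretrained body pose model plus its calibration and label files
model_inputs = Data(
    name="bodypose-model-inputs",
    type=AssetTypes.URI_FOLDER,
    path="azureml://datastores/workspaceblobstore/paths/bodypose/model/",
    description="Pretrained body pose model with calibration and label files",
)

# Folder of videos to run body pose inference on
video_inputs = Data(
    name="bodypose-input-videos",
    type=AssetTypes.URI_FOLDER,
    path="azureml://datastores/workspaceblobstore/paths/bodypose/videos/",
    description="Input videos for DeepStream body pose inference",
)

ml_client.data.create_or_update(model_inputs)
ml_client.data.create_or_update(video_inputs)
```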

The DeepStream body pose command component is configured to use Microsoft Azure Blob Storage. The component monitors the input directory for new video files that require inference. When a video file appears, the component picks it up and performs body pose inference. The output video, annotated with bounding boxes and tracking lines, is stored in an output directory.
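
A rough sketch of how a command job along these lines could be wired up is shown below. The launch script, environment reference, and input and output names are hypothetical; the actual component is available prebuilt in the registry.

```python
# Hypothetical sketch: a command job that runs the DeepStream body pose sample
# over blob-backed inputs. The script, environment reference, and I/O names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, command, Input, Output

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>")

deepstream_bodypose = command(
    code="./src",  # hypothetical folder holding a launch script for the DeepStream sample app
    command=(
        "bash run_bodypose.sh "
        "--model ${{inputs.model_dir}} "
        "--videos ${{inputs.video_dir}} "
        "--output ${{outputs.annotated_videos}}"
    ),
    inputs={
        "model_dir": Input(type="uri_folder", path="azureml:bodypose-model-inputs:1"),
        "video_dir": Input(type="uri_folder", path="azureml:bodypose-input-videos:1"),
    },
    outputs={
        # Videos annotated with bounding boxes and tracking lines land here
        "annotated_videos": Output(type="uri_folder"),
    },
    # Prebuilt DeepStream environment from the registry (placeholder reference)
    environment="azureml://registries/<registry-name>/environments/<deepstream-environment>/versions/<version>",
    compute="<gpu-cluster-name>",
)

ml_client.jobs.create_or_update(deepstream_bodypose, experiment_name="deepstream-bodypose")
```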

Additional samples available within the registry include:

  • bodyposenet
  • citysemsegformer
  • dashcamnet
  • emotionnet
  • fpenet
  • gazenet
  • gesturenet
  • lprnet
  • peoplenet
  • peoplenet_transformer
  • peoplesemsegnet
  • reidentificationnet
  • retail_object_detection
  • retail_object_recognition
  • trafficcamnet

Each of these samples can be improved with a TAO Toolkit-based training pipeline that performs transfer learning, adapting the model output to a specific use case. You can find TAO Toolkit computer vision sample workflows on NGC.
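
As a rough illustration, a transfer-learning pipeline of this kind could chain a TAO Toolkit training step ahead of a DeepStream inference step. All component names, inputs, and outputs below are hypothetical placeholders; consult the registry and the NGC sample workflows for the real assets.

```python
# Hypothetical sketch: chain a TAO Toolkit transfer-learning step ahead of DeepStream inference.
# All component names, inputs, and outputs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, Input, dsl

credential = DefaultAzureCredential()
ml_client = MLClient(credential, "<subscription-id>", "<resource-group>", "<workspace-name>")
registry_client = MLClient(credential=credential, registry_name="<nvidia-ai-enterprise-preview-registry>")

tao_train = registry_client.components.get(name="<tao-train-component>", version="<version>")
deepstream_infer = registry_client.components.get(name="<deepstream-bodypose-component>", version="<version>")

@dsl.pipeline(compute="<gpu-cluster-name>")
def finetune_and_infer(training_data, videos):
    # Transfer learning: adapt the pretrained model to use-case-specific data
    train_step = tao_train(training_data=training_data)
    # Run body pose inference with the fine-tuned model
    infer_step = deepstream_infer(model_dir=train_step.outputs.trained_model, video_dir=videos)
    return {"annotated_videos": infer_step.outputs.annotated_videos}

job = finetune_and_infer(
    training_data=Input(type="uri_folder", path="azureml://datastores/workspaceblobstore/paths/bodypose/train/"),
    videos=Input(type="uri_folder", path="azureml:bodypose-input-videos:1"),
)
ml_client.jobs.create_or_update(job, experiment_name="tao-finetune-bodypose")
```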

Get started with NVIDIA AI Enterprise on Azure Machine Learning

NVIDIA AI Enterprise and Azure Machine Learning together create a powerful combination of GPU-accelerated computing and a comprehensive cloud-based machine learning platform, enabling businesses to develop and deploy AI models more efficiently. This synergy enables enterprises to harness the flexibility of cloud resources while leveraging the performance advantages of NVIDIA GPUs and software. 

To get started with NVIDIA AI Enterprise on Azure Machine Learning, sign up for a Tech Preview. This will give you access to all of the prebuilt Components, Environments, and Models from the NVIDIA AI Enterprise Preview Registry on Azure Machine Learning.
