CV-CUDA Early Access

CV-CUDA is an open-source library that enables developers to build highly efficient, GPU-accelerated pre- and post-processing pipelines for cloud-scale Artificial Intelligence (AI) imaging and computer vision (CV) workloads. It provides a specialized set of CV and image-processing kernels hand-optimized for data center GPUs, so pipelines built with these kernels deliver much higher throughput across the entire workload. CV-CUDA can offer greater than 4x throughput improvement for the end-to-end pipeline, significantly lowering cloud computing cost and energy consumption. CV-CUDA integrates easily into C/C++ and Python, and offers interfaces to common Deep Learning (DL) frameworks such as PyTorch.

Key Features:

  • A unified, specialized set of highly performant CV kernels
  • C, C++, and Python APIs
  • Batching support, with variable shape images
  • Zero-copy interfaces to PyTorch and TensorFlow
  • Triton Inference Server example using CV-CUDA and TensorRT
  • End-to-end GPU-accelerated object detection, segmentation, and classification examples
Use Cases:

Common use cases for AI imaging and CV workloads deployed at scale in the cloud include:

  • Mapping
  • Generative AI
  • Three-dimensional (3D) worlds
  • Image understanding
  • Recommender systems
  • Video conferencing and video content enhancement
Enterprise Developer Engagement:

We are providing limited, direct support to select enterprises using CV-CUDA. Please fill out the short application using the link below and describe how you are using CV-CUDA. You must be a member of the NVIDIA Developer Program and logged in with your organization's email address; applications from personal email accounts will not be considered.

The Open Beta is planned for release in Spring 2023.

CV-CUDA Early Access Developer Application: