Edge Computing

Scalable AI Sensor Streaming with Multi-GPU and Multi-Node Capabilities in NVIDIA Holoscan 0.6

Demand for real-time insights and autonomous decision-making is growing in various industries. To meet this demand, we need scalable edge-solution platforms that can effectively process AI-enabled sensor data right at the source and scale out to on-premises or cloud compute resources.

However, developers face many challenges in using AI and sensor processing at the edge:

  • Real-time latency requirements
  • The complexity of building and maintaining custom pipelines for AI-enabled sensor processing
  • The need for hardware-agnostic solutions to meet heterogeneous hardware needs at the edge
  • Multimodal processing across diverse sensor types
  • Integration from edge to on-premises to a cloud-distributed network
  • Long-term whole-stack stability

Before NVIDIA Holoscan, no single platform offered a comprehensive solution that effectively addressed the multitude of edge AI challenges. By seamlessly integrating data movement, accelerated computing, real-time visualization, and AI inferencing, Holoscan ensures optimal application performance. It abstracts away complexities for developers, reduces time to market, and offers the convenience of coding in Python and C++, all in a low-code, high-performance infrastructure.

“The Holoscan platform enables new SaMD (software as a medical device) to be quickly productized with seamless integration with the development environment to enable SaMD productization and fast-track deployment,” said Nhan Ngo Dinh, president of Cosmo Intelligent Medical Devices. “This accelerates the time to market with integrated development programmable using the NVIDIA Holoscan SDK, easy to transition from development to production.”

For edge developers, navigating the heterogeneous landscape of edge devices with varying hardware requirements and architectures can be daunting. Holoscan simplifies this complexity through its hardware-agnostic approach.

The platform provides a unified stack, accommodating a wide range of devices from x86 to aarch64 and from NVIDIA Jetson AGX Orin to NVIDIA IGX Orin, catering to different power, size, cost, compute, and configuration needs. This versatility liberates you from hardware constraints while promoting interoperability, maintainability, and scalability across applications.

The v0.6 release of NVIDIA Holoscan introduces new features that empower you to reach new levels of scalability, productivity, and ease of use when building AI-streaming solutions. Specifically, this new set of features enables the following benefits:

  • Scalability at the edge through a distributed computing architecture
  • Portability, collaboration, and interoperability for platform development
  • Advanced profiling with data frame flow tracking for optimal performance

Scalability through distributed computing architecture

For developers with heavy workloads or those interested in scaling up and out, the Holoscan v0.6 multi-GPU, multi-node support enables distributed computing. Specifically, you can now deploy distributed applications that use all the resources available at the edge, with multiple GPUs on a single node, or deploy a single Holoscan application across separate physical nodes with optimized network communication.

Multi-GPU, multi-node support enables sensor processing applications to scale with ever-increasing compute requirements and gives you more flexibility and scalability in your designs. For users, it opens new possibilities: increased processing power, parallel processing, workload separation based on criticality, post-deployment scale-up without replacing existing units, and improved fault tolerance and reliability.

Figure 1. Distributed applications scaled out with multi-fragment APIs
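
To illustrate the multi-fragment API, the following is a minimal sketch of a two-fragment distributed application in Python. It is based on the Fragment and Application classes in the Holoscan SDK Python API; the operator, fragment, and port names are illustrative, not taken from the SDK.

```python
# Minimal sketch of a two-fragment distributed application, assuming the
# multi-fragment Python API introduced in Holoscan v0.6. Operator, fragment,
# and port names are illustrative.
from holoscan.core import Application, Fragment, Operator, OperatorSpec


class TxOp(Operator):
    """Emits an incrementing integer on its output port."""

    def setup(self, spec: OperatorSpec):
        spec.output("out")

    def compute(self, op_input, op_output, context):
        self.count = getattr(self, "count", 0) + 1
        op_output.emit(self.count, "out")


class RxOp(Operator):
    """Prints whatever arrives on its input port."""

    def setup(self, spec: OperatorSpec):
        spec.input("in")

    def compute(self, op_input, op_output, context):
        print(f"received: {op_input.receive('in')}")


class TxFragment(Fragment):
    def compose(self):
        self.add_operator(TxOp(self, name="tx"))


class RxFragment(Fragment):
    def compose(self):
        self.add_operator(RxOp(self, name="rx"))


class DistributedApp(Application):
    def compose(self):
        tx_fragment = TxFragment(self, name="tx_fragment")
        rx_fragment = RxFragment(self, name="rx_fragment")
        # Connect the two fragments; at launch time, each fragment can be
        # scheduled on a different GPU or a different physical node.
        self.add_flow(tx_fragment, rx_fragment, {("tx.out", "rx.in")})


if __name__ == "__main__":
    DistributedApp().run()
```

The same script can run everything in one process on a single node, or be launched once per node as a driver or worker that hosts a subset of the fragments; the exact launch flags for distributed applications are described in the Holoscan SDK user guide.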

Portability, collaboration, and interoperability for platform developers

If you are developing a platform instead of creating standalone products from scratch, Holoscan v0.6 provides a set of new features that enable greater scalability, flexibility, and ease of integration for broader use:

  • App packager: Provides portability through easy containerization and deployment of apps, enhancing collaboration and contributions to the platform.
  • Multi-backend: Streamlines the transition from model training to AI application building by enabling plug-and-play deployment of already trained PyTorch models (see the sketch after this list).
  • Holoviz volumetric rendering: Provides built-in volumetric rendering in support of medical imaging visualization, available on HoloHub as an operator and an application.
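
As an example of the multi-backend feature, the following sketch configures the SDK's built-in inference operator to run a trained PyTorch model directly. It assumes the InferenceOp parameters and the "torch" backend string described in the SDK documentation; the model name, tensor names, and model path are placeholders.

```python
# Sketch of selecting the PyTorch backend in the built-in inference operator,
# assuming the InferenceOp API from the Holoscan SDK; model name, tensor names,
# and the model path are placeholders.
from holoscan.core import Application
from holoscan.operators import InferenceOp
from holoscan.resources import UnboundedAllocator


class TorchInferenceApp(Application):
    def compose(self):
        inference = InferenceOp(
            self,
            name="inference",
            backend="torch",  # run the trained PyTorch model as-is; "trt" and "onnxrt" are alternatives
            allocator=UnboundedAllocator(self, name="allocator"),
            model_path_map={"my_model": "/workspace/models/my_model.pt"},
            pre_processor_map={"my_model": ["preprocessed"]},
            inference_map={"my_model": "inferred"},
        )
        # In a complete pipeline, add_flow() would connect upstream operators
        # that produce the "preprocessed" tensor and downstream operators that
        # consume the "inferred" tensor.
        self.add_operator(inference)
```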

Advanced profiling with data frame flow tracking

This release includes a data frame flow tracking feature that lets you measure end-to-end performance by tracking data frames through the pipeline, so you can quickly identify and address bottlenecks. In addition, the new multithreaded scheduler enables applications to run operators in parallel, making better use of system resources.
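
A minimal sketch of both features in Python is shown below. It assumes the Application.track() flow-tracking API and the MultiThreadScheduler described in the SDK documentation; MyPipelineApp stands in for any existing Holoscan application.

```python
# Sketch of enabling data frame flow tracking and the multithreaded scheduler,
# assuming the Application.track() and MultiThreadScheduler APIs from the SDK;
# MyPipelineApp is a placeholder for your own Application subclass.
from holoscan.schedulers import MultiThreadScheduler

app = MyPipelineApp()

# Run operators in parallel worker threads instead of the default
# single-threaded greedy scheduler.
app.scheduler(
    MultiThreadScheduler(app, name="multithread_scheduler", worker_thread_number=4)
)

# Enable end-to-end data frame flow tracking before running the application.
tracker = app.track()
app.run()

# Print per-path latency statistics to locate bottlenecks.
tracker.print()
```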

Use cases and success stories

The NVIDIA Holoscan SDK has rapidly evolved into an accelerated, full-stack infrastructure, making significant contributions to scalable, software-defined, and real-time processing across various domains.

The following healthcare companies have embraced and built on this technology:

  • Medtronic, the largest medical device company in the world, is building a next-generation AI-assisted colonoscopy system (pending FDA approval) on the Holoscan platform.
  • Moon Surgical, a Paris-based robotic surgery company, is finalizing the productization of Maestro, an accessible, adaptive surgical-assistant robotics system built on Holoscan and IGX.
  • ORSI Academy, a surgical training center in Belgium, has used Holoscan to support first-in-human real-world, robot-assisted surgery for critical operations like the removal of cancerous kidneys.

“Holoscan abstracts away the challenges in building safe and reliable sensor data processing pipelines for medical device applications. Traditionally such pipelines required dedicated teams of specialists to develop and maintain,” said David Noonan, CTO at Moon Surgical. “Using Holoscan, we’re able to launch innovative features for our Maestro surgical robotics system with short development timelines while maintaining a lean R&D team.”

Other use cases include AR/VR, radar technology, and scientific instrumentation.

Magic Leap is working on a Holoscan-based solution that combines AI with true augmented reality to revolutionize physician and surgeon training, visualization, and complex procedure performance.

Researchers from the Georgia Tech Research Institute used Holoscan to develop a real-time radar application with uses in defense, aerospace, meteorology, navigation, and surveillance.

NVIDIA and Analog Devices collaborated to build a 5G instrumentation application that leverages Holoscan for compute-intensive signal processing at over 120 Gbps on the NVIDIA IGX platform.

At Diamond Light Source, a world-renowned synchrotron in the UK, developers used Holoscan and JAX-based Holoscan operators to easily connect Holoscan to existing ptychography software libraries, speeding up image processing and reconstruction.

“With Holoscan, we’ve created an end-to-end data streaming pipeline that enables live ptychographic image processing at the I08-1 beam line, considerably enriching the overall user interaction,” said Paul Quinn, Imaging and Microscopy Science group leader, Diamond Light Source.

Get started with Holoscan 0.6

The release of NVIDIA Holoscan 0.6 marks a significant milestone in the development of edge AI solutions, offering unprecedented scalability, flexibility, and performance. With its diverse range of applications and success stories, Holoscan is shaping the future of AI-enabled sensor processing at the edge, opening new possibilities for various industries worldwide.

To get started developing, see the NVIDIA Holoscan SDK.
