Target Container Support#
NVIDIA DriveOS provides a Docker containerization service as part of the NVIDIA DriveOS Linux Guest OS. The service integrates both the Docker runtime environment and the NVIDIA Container Toolkit to provide containerization with iGPU access on the target.
Docker services contain the following components:
Docker Runtime
NVIDIA Container Toolkit
Sample: How to Launch
Support for Other Applications
Docker Runtime#
Docker is a high-level containerization platform that builds on Linux kernel namespaces and cgroups to provide a container runtime with strong guarantees of isolation and repeatability. The runtime provides both a daemon, which implements the containerization, and a client that interacts with that daemon.
The Docker client can be run by executing sudo docker on the target. You can confirm that the daemon is running by executing sudo service docker status.
The Docker runtime present in the NVIDIA DriveOS Linux Guest OS is from upstream Ubuntu Canonical repositories and is considered experimental on non-amd64 architectures.
To learn more about Docker, its runtime, and its features, visit https://docker.com.
NVIDIA Container Toolkit#
Note
NVIDIA Container Toolkit is not provided as part of the installation on the target. Visit the NVIDIA Container Toolkit official documentation Installing the NVIDIA Container Toolkit — NVIDIA Container Toolkit 1.17.0 documentation and follow instructions for installation and configuration of the latest NVIDIA Container Toolkit on the Linux Guest OS. Internet access on the platform will be required.
The NVIDIA Container Runtime uses the Mount Plugins Specification to determine which directories, devices, libraries, and files to make available to a Docker container at runtime. Through this specification, it is possible to run GPU-accelerated applications on the target within a Docker container that would otherwise have been isolated from the iGPU.
/etc/nvidia-container-runtime/host-files-for-container.d/devices.csv
Specifies the list of devices for the Mount Plugins Specification.
/etc/nvidia-container-runtime/host-files-for-container.d/drivers.csv
Specifies the list of libraries for the Mount Plugins Specification.
The preceding two files are provided as part of the NVIDIA DriveOS Linux Guest OS file system to enable out-of-the-box support for running GPU-accelerated applications on the target within Docker. They provide instructions to the NVIDIA Container Toolkit for mounting the required directories, devices, libraries, and so on, into Docker.
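As an illustration, entries in these CSV files follow the NVIDIA Container Toolkit's `type, path` format, where the type is one of dev, lib, sym, or dir. The paths below are representative examples only, not the exact contents shipped with the DriveOS file system:

```text
# devices.csv -- device nodes to expose inside the container (illustrative paths)
dev, /dev/nvhost-ctrl
dev, /dev/nvmap

# drivers.csv -- host libraries, symlinks, and directories to mount (illustrative paths)
lib, /usr/lib/aarch64-linux-gnu/tegra/libcuda.so
sym, /usr/lib/aarch64-linux-gnu/libcuda.so
dir, /usr/local/cuda
```

Each line tells the NVIDIA Container Runtime to bind-mount the named host path into containers started with the nvidia runtime.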
The following section provides a step-by-step guide to using Docker on the target and running a GPU-accelerated CUDA sample within a Docker container via the NVIDIA Container Runtime.
Sample: How to Launch#
The following steps create and run a sample application called cudaQuery, which is ported from CUDA's deviceQuery sample. The steps also assume that the platform is connected to the Internet: Internet access is required when building the sample container, because the Docker runtime pulls the ubuntu:24.04 image from Docker's repository.
In any working directory, create the cudaQuery.cpp file with the following content:
#include <cuda_runtime.h>
#include <cstdlib>
#include <iostream>

int main(void) {
    std::cout << "Running Sample Cuda Query... " << std::endl;

    int dev = 0;
    cudaSetDevice(dev);

    // Query the properties of the selected GPU device and fail cleanly
    // if no device is visible (for example, when run without the nvidia runtime).
    cudaDeviceProp deviceProp;
    if (cudaGetDeviceProperties(&deviceProp, dev) != cudaSuccess) {
        std::cerr << "cudaGetDeviceProperties failed" << std::endl;
        return EXIT_FAILURE;
    }

    std::cout << std::endl << "Device " << dev << ": " << deviceProp.name << std::endl;
    std::cout << "CUDA Capability major/minor: " << deviceProp.major << "." << deviceProp.minor << std::endl;
    return EXIT_SUCCESS;
}
Compile the sample application.
/usr/local/cuda/bin/nvcc -ccbin g++ -o cudaQuery cudaQuery.cpp
Run the cudaQuery sample and confirm that it prints the GPU device information:
./cudaQuery
Create a file called Dockerfile in the same working directory with the following content:
FROM ubuntu:24.04
ADD cudaQuery /
CMD ["/cudaQuery"]
Execute Docker with the following command to build the sample container:
sudo docker build -t test:sample .
Execute the following command to run the sample from within Docker. It should print the same GPU device information to the console as the native cudaQuery run above.
sudo docker run --rm --privileged --net host --runtime nvidia --gpus all test:sample
The docker build command might take a few moments, as it needs to pull the ubuntu:24.04 base image before building the sample container.
Support for Other Applications#
The devices.csv and drivers.csv files are configured out of the box to support successfully running the cudaQuery sample application within Docker. Supporting other GPU-accelerated applications only requires adding the appropriate paths to dependent libraries, devices, directories, and so on, to the devices.csv and drivers.csv files.
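One minimal way to find candidate drivers.csv entries is to list the shared libraries your application resolves at load time, for example with ldd. The command below uses /bin/ls as a stand-in binary; substitute the path to your own GPU-accelerated application:

```shell
# Print the resolved shared-library paths a binary depends on.
# /bin/ls is a placeholder; point this at your own application binary.
ldd /bin/ls | awk '$3 ~ /^\// {print $3}' | sort -u
```

Paths that resolve to host GPU driver libraries are the ones to add as lib entries in drivers.csv so the NVIDIA Container Runtime mounts them into the container.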
To run through an example workflow, use the NVIDIA DriveWorks hello_world sample application.