Docker Services

NVIDIA DRIVE® OS provides a Docker containerization service as part of the NVIDIA DRIVE OS Linux Guest VM. The service integrates both the Docker runtime environment and the NVIDIA Container Toolkit to provide containerization with iGPU access on the target.

Note: Docker services on the target are still considered experimental. They are recommended for development purposes only and are not intended for production use.

Docker Runtime

Docker provides a high-level API on top of kernel namespacing, delivering a Linux container runtime with strong guarantees of isolation and repeatability. The runtime consists of a daemon, which implements the containerization, and a client used to interact with that daemon.

The Docker client can be run by executing sudo docker on the target. You can confirm that the daemon is running by executing sudo service docker status.
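For a quick sanity check on the target, you might run something like the following; this is a minimal sketch, and the exact status text varies with the Ubuntu release and init system:

    sudo service docker status   # expect a "running" status message
    sudo docker info             # prints daemon details if the client can reach it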

The Docker runtime present in the DRIVE OS Linux Guest VM comes from the upstream Canonical Ubuntu repositories and is considered experimental on non-amd64 architectures.

To learn more about Docker, its runtime, and its features, visit https://docker.com.

NVIDIA Container Toolkit

The NVIDIA Container Toolkit provides the Mount Plugins Specification. The NVIDIA Container Runtime uses this specification to determine which directories, devices, libraries, and files to make available to a Docker container at runtime. Through the Mount Plugins Specification, it is possible to run GPU-accelerated applications on the target within a Docker container that would otherwise have been isolated from the iGPU. The specification is driven by the following two files:

  • /etc/nvidia-container-runtime/host-files-for-container.d/devices.csv

    Specifies the list of devices for the Mount Plugins Specification.

  • /etc/nvidia-container-runtime/host-files-for-container.d/drivers.csv

    Specifies the list of libraries for the Mount Plugins Specification.

The preceding two files are provided as part of the DRIVE OS Linux Guest VM filesystem to enable out-of-the-box support for running CUDA samples on the target within Docker. They tell the Mount Plugins Specification which directories, devices, libraries, and so on, to mount into Docker containers.
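Each line in these CSV files pairs an entry type with a host path. The following is an illustrative sketch only; the paths shown are hypothetical, so consult the entries already present in the files shipped on your target for the authoritative format:

    # devices.csv: character devices to expose inside the container (hypothetical path)
    dev, /dev/nvhost-ctrl-gpu
    # drivers.csv: libraries and directories to bind-mount (hypothetical paths)
    lib, /usr/lib/aarch64-linux-gnu/tegra/libcuda.so
    dir, /usr/local/cuda-11.4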

The section below provides a simple step-by-step guide to using Docker on the target with the NVIDIA Container Runtime to run a GPU-accelerated CUDA sample from within Docker.

Sample: How to Launch

The following sample uses the deviceQuery application that is provided as part of the CUDA installation on the target. It also assumes that the platform is connected to the Internet: Internet access is required in step 3, where the Docker runtime pulls the ubuntu:20.04 image from Docker's repository.

  1. Change directory to the path for the deviceQuery sample and compile it.

    cd /usr/local/cuda-11.4/samples/1_Utilities/deviceQuery/ && sudo make
  2. Run the deviceQuery sample and confirm successful output for GPU device information (an abbreviated example of the expected output appears after these steps):

    ./deviceQuery
  3. Execute Docker with the given sample to confirm iGPU access from within Docker. The command bind-mounts the current directory into the container (-v), sets it as the working directory (-w), and enables the NVIDIA runtime with GPU access (--runtime nvidia --gpus all). It should print to the console the same output that was printed in step 2.

    sudo docker run --rm --runtime nvidia --gpus all -v $(pwd):$(pwd) -w $(pwd) ubuntu:20.04 ./deviceQuery

     This command might take a few moments because the Docker runtime needs to pull the Ubuntu 20.04 image before starting the container.
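For reference, a successful deviceQuery run enumerates the detected GPU's properties and ends with a PASS result. The exact fields and values depend on your platform; the following is an abbreviated, illustrative sketch, not verbatim output:

    ./deviceQuery Starting...

     CUDA Device Query (Runtime API) version (CUDART static linking)

    Detected 1 CUDA Capable device(s)
    ...
    Result = PASS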

Supporting Other Applications

The devices.csv and drivers.csv files are configured out of the box to support running the deviceQuery CUDA sample application within Docker. Supporting other GPU-accelerated applications only requires adding the appropriate paths for dependent libraries, devices, directories, and so on, to the devices.csv and drivers.csv files.

Follow the template already present in those files when making the changes necessary to support your applications. One possible workflow is sketched below.
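As a sketch of one such workflow, you might inspect an application's shared-library dependencies and append matching entries to drivers.csv. The application name myApp and the library path below are hypothetical, and the entry type follows the template already used in the shipped files:

    # List the shared libraries a hypothetical application depends on.
    ldd ./myApp

    # Append an entry for a host library the container must see (hypothetical path).
    echo "lib, /usr/lib/aarch64-linux-gnu/libfoo.so" | \
        sudo tee -a /etc/nvidia-container-runtime/host-files-for-container.d/drivers.csv

Device nodes are added to devices.csv in the same way, using the entry type shown in that file's existing template.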