Build and Run Sample Applications for DRIVE OS 6.x Linux

Note: To access the GPU from a Docker Container, please ensure that you have NVIDIA Container Toolkit installed. Installation instructions for NVIDIA Container Toolkit can be found at https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html for your host development environment distribution. An NVIDIA GPU and appropriate CUDA drivers must also be available on the host to run GPU-accelerated applications, including samples.
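
For example, to confirm that a container can see the GPU, you can run nvidia-smi inside it (a minimal check; <image name> is a placeholder for your DRIVE OS build container image):

$ docker run --rm --gpus all <image name> nvidia-smi
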
Note: If you are compiling and running samples in Docker and you want to preserve the compiled samples, keep in mind that a Docker container is an ephemeral environment: changes made inside it are lost when the container is removed. To preserve the changes made in a running container, refer to the official Docker documentation on committing Docker images: https://docs.docker.com/engine/reference/commandline/commit.

Alternatively, if the container was started with a mounted ${WORKSPACE}, you can copy or move the compiled binaries into the mounted ${WORKSPACE} before exiting the container so that they remain available on the host.
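
For example, assuming a container named drive_os_build and a workspace mounted at /drive (both names are illustrative):

# From the host: snapshot the running container as a new image
$ docker commit drive_os_build drive_os_samples:built

# Or, from inside the container: copy build output into the mounted workspace
$ cp -r ~/NVIDIA_CUDA-11.4_Samples /drive/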

Graphics Applications

For boards that are pre-flashed or flashed with the driveos-oobe-[desktop] file system, the Bubble sample application is available in the /opt/nvidia/drive-linux/samples/ folder.

To run a basic X11 sample, use the following commands:

$ sudo -b X -noreset -ac 
$ export DISPLAY=:0 
$ cd /opt/nvidia/drive-linux/samples/opengles2/bubble/x11 
$ ./bubble -fps 
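
If the sample cannot open a display, you can quickly verify that the X server came up (a minimal check; the socket name assumes display :0):

$ echo $DISPLAY            # should print :0
$ ls /tmp/.X11-unix/       # an X0 socket here indicates the server is running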

To run other graphics samples for X11 and the supported window systems, see Building and Running Samples and Window Systems.

CUDA

Documentation is available online: CUDA Toolkit Documentation v11.4.1.

  • The CUDA toolkits for the x86 host and for Linux AArch64 cross-compilation are installed in the /usr/local/cuda-11.4/ folder.
  • The source code for all CUDA samples is in the /usr/local/cuda-11.4/samples folder.
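
As a quick check that the host toolkit is installed and on the expected version, you can query nvcc directly:

$ /usr/local/cuda-11.4/bin/nvcc --version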

How to Build the CUDA Samples for the Linux Target

Host cross-compile is supported for DRIVE OS releases only. After you finish installing CUDA x86 and cross-compile packages, perform the following steps:

  1. Install the CUDA sample sources to a directory where you do not need root privileges to write, such as the $HOME directory as shown in the following example:

    $ cd ~/
    • If you installed the driveos-oobe-desktop filesystem, use the following command to run the cuda-install-samples-11.4.sh script from the CUDA installation in the target file system:

      $ $NV_WORKSPACE/drive-linux/filesystem/targetfs/usr/local/cuda-11.4/bin/cuda-install-samples-11.4.sh .
    • If you installed the default Debian local repo, install from the host CUDA directory and remove the nvJPEG folders:

      $ /usr/local/cuda-11.4/bin/cuda-install-samples-11.4.sh .
      $ cd ~/NVIDIA_CUDA-11.4_Samples/7_CUDALibraries/
      $ rm -r nvJPEG*
      $ cd ~/NVIDIA_CUDA-11.4_Samples
  2. For Debian package and Docker installations, build the samples:

    sudo make SMS=87 TARGET_ARCH=aarch64 TARGET_OS=linux TARGET_FS=$NV_WORKSPACE/drive-linux_src/filesystem/targetfs

    Where $NV_WORKSPACE is /drive for Docker containers and <install_path>/DRIVE_OS_6.0.6_SDK_Linux_DRIVE_AGX_ORIN_DEVKITS/DRIVEOS/ for SDK Manager installations.

Note: This builds the samples for the Orin GPU (SMS=87). To build for other GPUs, replace the SMS value with the compute capability of the target GPU.
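
If you only need a single sample, the same make invocation can be run from that sample's directory instead of the top level (a minimal sketch, using deviceQueryDrv as an example):

$ cd ~/NVIDIA_CUDA-11.4_Samples/1_Utilities/deviceQueryDrv
$ sudo make SMS=87 TARGET_ARCH=aarch64 TARGET_OS=linux TARGET_FS=$NV_WORKSPACE/drive-linux_src/filesystem/targetfs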

How to Run the CUDA Samples

To run a CUDA sample application:

  1. Copy the sample files of your choice to the target.

  2. From the target, run the sample application.

For example, from the target:

$ cd ~/
$ rcp -r <username>@<host ip address>:/path/to/NVIDIA_CUDA-11.4_Samples/ .
$ cd NVIDIA_CUDA-11.4_Samples/1_Utilities/deviceQueryDrv
$ ./deviceQueryDrv   # you can launch any sample application this way
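
If rcp is not available on the target, scp can be used in the same way (a sketch; it takes the same source and destination arguments):

$ scp -r <username>@<host ip address>:/path/to/NVIDIA_CUDA-11.4_Samples/ .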

TensorRT

After you finish installing the TensorRT packages, the /usr/src/tensorrt folder is created on the development host.


How to Build the TensorRT Samples

On the development host:

$ cd /usr/src/tensorrt/samples
$ sudo make TARGET=aarch64
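
To rebuild only one sample, you can run make from that sample's subdirectory with the same TARGET (a minimal sketch, using sampleAlgorithmSelector, which produces the sample_algorithm_selector binary used below):

$ cd /usr/src/tensorrt/samples/sampleAlgorithmSelector
$ sudo make TARGET=aarch64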

How to Run the TensorRT Samples on the Target

To run a TensorRT sample application:

  1. Copy the sample files of your choice to the target.

  2. From the target, run the sample application.

For example, from the host:

$ scp -r /usr/src/tensorrt <username>@<target ip address>:/home/nvidia/tensorrt

From the target:

$ cd ~/tensorrt
$ ./bin/sample_algorithm_selector
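
Most TensorRT samples accept a common set of command-line options; if a sample cannot locate its data files, you can point it at the copied data directory (a hedged sketch; check the sample's --help output for the options it actually supports):

$ ./bin/sample_algorithm_selector --help
$ ./bin/sample_algorithm_selector --datadir ~/tensorrt/data/mnist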

For further information, the TensorRT Developer Guide and API reference are available at https://docs.nvidia.com/deeplearning/tensorrt/index.html, including sample cross-compile instructions at https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#cross-compiling-linux.