Build and Run Sample Applications for DRIVE OS 6.x Linux

Note: To access the GPU from a Docker container, ensure that the NVIDIA Container Toolkit is installed. Installation instructions for your host development environment distribution are available in the NVIDIA Container Toolkit documentation. An NVIDIA GPU and the appropriate CUDA drivers must also be available on the host to run GPU-accelerated applications, including the samples.
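
As a quick check that a container can reach the GPU, you can run nvidia-smi inside a disposable container. A minimal sketch; the CUDA base image tag below is illustrative, so use one that matches your host driver:

$ docker run --rm --gpus all nvidia/cuda:11.4.1-base-ubuntu20.04 nvidia-smi
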
Note: If you compile and run samples in Docker and want to preserve the compiled samples, keep in mind that a Docker container is a temporary environment whose changes are lost when the container is removed. To preserve changes made in a running container, refer to the official Docker documentation on committing Docker images (docker commit).

Alternatively, if the container was started with ${WORKSPACE} mounted, you can copy or move the compiled binaries into the mounted ${WORKSPACE} before exiting the container so that they remain available on the host.
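
For example, assuming the container is still running and ${WORKSPACE} was mounted at /workspace (the container name my_drive_os and the paths below are illustrative), either of the following preserves your work:

# From the host: snapshot the running container as a new image
$ docker commit my_drive_os drive-os-samples:built

# Or, from inside the container: copy the built binaries into the mounted workspace
$ cp -r /usr/local/cuda-11.4/samples/bin /workspace/cuda_sample_bins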

Graphics Applications

For pre-flashed boards and boards flashed with the DRIVE OS Docker container, the samples are installed under /opt/nvidia/. To run a basic X11 sample, do the following:

$ cd /opt/nvidia/drive-linux/samples/opengles2/bubble/x11/
$ ./bubble -fps
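
If the sample cannot open a display, make sure an X server is running on the target and that DISPLAY points at it; for a local console session this is typically:

$ export DISPLAY=:0
$ ./bubble -fps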


CUDA Samples

Documentation is available online: CUDA Toolkit Documentation v11.4.1.

CUDA for the x86_64 host and for the Linux aarch64 target is installed on the development host in /usr/local/cuda-11.4/.

All CUDA samples are available in source form in /usr/local/cuda-11.4/samples.

How to Build the CUDA Samples for the Linux Target

For DRIVE OS releases, only cross-compilation on the host is supported. Perform the following steps:

  1. From a terminal on the host, change to the samples directory.

    $ cd /opt/nvidia/drive-linux/NVIDIA_CUDA-11.4_Samples
  2. Build the samples.

    $ make TARGET_ARCH=aarch64 TARGET_OS=linux SMS=87
Note: This builds the samples for the Orin GPU (SMS=87). To build for other GPUs, replace the SMS value with the compute capability of the target GPU.
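
The samples Makefile also accepts a space-separated list of SM versions, so one build can target several GPUs. A sketch, assuming you want binaries for both Xavier (SM 72) and Orin (SM 87):

$ make TARGET_ARCH=aarch64 TARGET_OS=linux SMS="72 87"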

How to Run the CUDA Samples

From the host:

$ rcp -r /usr/local/cuda-11.4/samples/bin/aarch64/linux/release <username>@<target ip address>:/home/nvidia/cuda_samples
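
If rcp is not available in your environment, scp performs the same copy over SSH (assuming an SSH server is running on the target):

$ scp -r /usr/local/cuda-11.4/samples/bin/aarch64/linux/release <username>@<target ip address>:/home/nvidia/cuda_samples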

From the target:

$ cd cuda_samples
$ ./deviceQueryDrv   # you can launch any sample application this way
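
To smoke-test everything that was copied, a minimal sketch that runs each binary in turn (this assumes the samples run without arguments; a few expect extra data files or flags and must be run individually):

$ cd ~/cuda_samples
$ for app in ./*; do echo "=== $app ==="; "$app"; done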


TensorRT Samples

The /usr/src/tensorrt folder is created in the Build Docker.

How to Build the TensorRT Samples

On the development host:

$ cd /usr/src/tensorrt/samples
$ make TARGET=aarch64
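
Each sample can also be built on its own from its subdirectory, which is convenient when iterating on a single sample (sampleGoogleNet is used here only as an example):

$ cd /usr/src/tensorrt/samples/sampleGoogleNet
$ make TARGET=aarch64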

How to Run the TensorRT Samples on the Target

Copy the files to the target to make the TensorRT sample apps available to run.

From the host:

$ rcp -r /usr/src/tensorrt <username>@<target ip address>:/home/nvidia/tensorrt

From the target:

$ cd ~/tensorrt/bin
$ ./sample_googlenet
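
Most TensorRT samples look for their network and weight files relative to a data directory. If a sample cannot find its data, pass the location explicitly; the --datadir flag below is the common convention for these samples (check the sample's -h output for its exact flags), and the path assumes the copy step above:

$ ./sample_googlenet --datadir=/home/nvidia/tensorrt/data/googlenet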

For further information, see the TensorRT Developer Guide and API Reference documentation, which also include cross-compilation instructions for the samples.