Build and Run Sample Applications for DRIVE OS 6.x Linux

Note: To access the GPU from a Docker container, ensure that the NVIDIA Container Toolkit is installed. Installation instructions for your host development environment distribution are available at https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html. An NVIDIA GPU and appropriate CUDA drivers must also be available on the host to run GPU-accelerated applications, including the samples.
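
As a quick smoke test that containers can see the GPU, you can run nvidia-smi inside a CUDA base image on the host. The image tag below is only an example; substitute one matching your installed CUDA version:

$ docker run --rm --gpus all nvidia/cuda:11.4.1-base-ubuntu20.04 nvidia-smi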
Note: If you compile and run samples in Docker and want to preserve the compiled samples, keep in mind that a container's filesystem is ephemeral: changes made inside a running container are lost once the container is removed. To preserve them, refer to the Docker official documentation on committing Docker images: https://docs.docker.com/engine/reference/commandline/commit.
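
For example, a minimal sketch assuming the container is still running (the container ID and image tag below are placeholders):

$ docker ps                                  # look up the running container's ID
$ docker commit <container id> drive-os-samples:built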

Alternatively, if the container was started with a mounted ${WORKSPACE}, you may copy or move the compiled binaries to the mounted ${WORKSPACE} before exiting the container so that they remain available on the host.
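For example, assuming ${WORKSPACE} is mounted at /workspace inside the container and the CUDA samples were built as described later in this section (the paths are illustrative):

$ cp -r /usr/local/cuda-11.4/samples/bin /workspace/cuda_sample_bins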

Graphics Applications

For the pre-flashed boards and boards flashed with the DRIVE OS Docker container, the samples are installed under /opt/nvidia/. To run a basic X11 sample, do the following:

$ cd /opt/nvidia/drive-linux/samples/opengles2/bubble/x11/
$ ./bubble -fps
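
If the sample cannot open a window, DISPLAY may not be set in the current shell. Assuming an X server is running on the target's first display, a setting such as the following is typical:

$ export DISPLAY=:0
$ ./bubble -fps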

CUDA

Documentation is available online: CUDA Toolkit Documentation v11.4.1.

CUDA Host x86 and Linux aarch64 is installed on the development host in /usr/local/cuda-11.4/.

All CUDA samples are available on the development host in source code in /usr/local/cuda-11.4/samples.

How to Build the CUDA Samples for the Linux Target

For DRIVE OS releases, only cross-compilation on the host is supported. Perform the following steps:

  1. On the development host, cd to the samples directory.

    $ cd /usr/local/cuda-11.4/samples
  2. Build the samples.

    $ make TARGET_ARCH=aarch64 TARGET_OS=linux SMS=87
Note: This builds the samples for the Orin GPU (SMS=87). To build for other GPUs, replace the SMS value with the compute version of the target GPU; a sketch for building several versions at once follows this note.
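
The samples Makefile also accepts a space-separated list of compute versions, so one invocation can produce binaries for multiple GPUs. The values shown here are illustrative:

$ make TARGET_ARCH=aarch64 TARGET_OS=linux SMS="72 87"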

How to Run the CUDA Samples

From the host:

$ rcp -r /usr/local/cuda-11.4/samples/bin/aarch64/linux/release/ <username>@<target ip address>:/home/nvidia/cuda_samples
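
If rcp is not available on the development host, scp performs the same copy, assuming SSH access to the target:

$ scp -r /usr/local/cuda-11.4/samples/bin/aarch64/linux/release/ <username>@<target ip address>:/home/nvidia/cuda_samples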

From the target:

$ cd cuda_samples
$ ./deviceQueryDrv 
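
Other compiled samples in the same directory run the same way; for example, assuming matrixMul was among the samples built and copied over:

$ ./matrixMul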

TensorRT

The /usr/src/tensorrt folder is created in the Build Docker container.

How to Build the TensorRT Samples

On the development host:

$ cd /usr/src/tensorrt/samples
$ sudo make TARGET=aarch64
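
Each sample can also be built individually from its own subdirectory; for example, a sketch assuming the sampleGoogleNet sources are present in this release:

$ cd /usr/src/tensorrt/samples/sampleGoogleNet
$ sudo make TARGET=aarch64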

How to Run the TensorRT Samples on the Target

Copy the compiled samples to the target so that the TensorRT sample apps are available to run there.

From the host:

$ scp -r /usr/src/tensorrt <username>@<target ip address>:/home/nvidia/tensorrt

From the target:

$ cd ~/tensorrt/bin
$ ./sample_googlenet
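
Most TensorRT samples locate their model and data files through a --datadir option; if the sample reports missing data, pointing it at the copied data directory is a common fix. The path below assumes the layout copied above:

$ ./sample_googlenet --datadir=/home/nvidia/tensorrt/data/googlenet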

For further information, the TensorRT Developer Guide and API Reference documents are available at https://docs.nvidia.com/deeplearning/tensorrt/index.html, including sample cross-compile information at https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#cross-compiling-linux.