Build and Run the Sample Applications for DRIVE OS 6.x QNX

Note: To access the GPU from a Docker container, ensure that the NVIDIA Container Toolkit is installed. Installation instructions for your host development environment distribution are available at https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html. An NVIDIA GPU and the appropriate CUDA driver must also be present on the host to run GPU-accelerated applications, including the samples.
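
For example, a container can be started with GPU access as follows (a minimal sketch; <docker_image> and the container-side mount point are placeholders for your DRIVE OS build container and workspace layout):

    # Start an interactive container with access to all host GPUs; the
    # -v option mounts your workspace so build results can be preserved.
    docker run -it --gpus all -v ${WORKSPACE}:/home/nvidia/ <docker_image>
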
Note: If you are compiling and running samples in Docker and want to preserve the compiled samples, keep in mind that a Docker container is a temporary environment whose changes are lost when the container is removed. To preserve changes made in a running container, refer to the official Docker documentation on committing images: https://docs.docker.com/engine/reference/commandline/commit.
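
For example (a sketch; <container_id> and the drive-os-samples:built tag are placeholder names):

    # On the host: snapshot the container into a new image so the
    # compiled samples survive removal of the container.
    docker commit <container_id> drive-os-samples:built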

Alternatively, if the container was started with a mounted ${WORKSPACE}, you can copy or move the compiled binaries into ${WORKSPACE} before exiting the container so that they remain available on the host.
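
For example, from inside the container (the source path is a placeholder for wherever the samples were built):

    # Copy the built binaries into the mounted workspace before exiting.
    cp -R <path_to_built_samples>/bin ${WORKSPACE}/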

Graphics Applications

The QNX Screen services should be started only once after the target has booted; a graphics sample can then be run:

# screen -c /tmp/graphics.conf
# cd /samples/opengles2/screen
# ./bubble

CUDA

Documentation is available online: CUDA Toolkit Documentation v11.4.4.

CUDA Host x86 is installed in /usr/local/cuda/.

CUDA aarch64 QNX is installed in /usr/local/cuda-targets/aarch64-qnx/.

All CUDA samples are available in source form on the development host in /usr/local/cuda/samples.
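
As a quick sanity check, both installations can be confirmed on the host, for example:

    /usr/local/cuda/bin/nvcc --version          # host x86 toolkit
    ls /usr/local/cuda-targets/aarch64-qnx/     # aarch64 QNX target files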

How to Build the CUDA Samples

Copy /usr/local/cuda/samples to a location where you don’t need root privileges to write (cp -R /usr/local/cuda/samples <some_location>).
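
For example, using your home directory as the destination (an illustrative location only):

    cp -R /usr/local/cuda/samples $HOME/
    # the samples can then be built from $HOME/samples as shown below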

cd <some_location>/samples
export CUDA_PATH=/usr/local/cuda-safe-11.4
export QNX_SDK_PATH=<path_to_SDK>/drive-qnx/
make clean
make SMS="87" TARGET_ARCH=aarch64 TARGET_OS=qnx EXTRA_CCFLAGS=" -I$QNX_SDK_PATH/include \"-Wl\,-rpath-link\,$QNX_SDK_PATH/nvidia-bsp/aarch64le/usr/lib:$QNX_SDK_PATH/targetfs/lib:$QNX_SDK_PATH/lib-target:/usr/lib/aarch64-qnx-gnu \"" TARGET_FS=$QNX_SDK_PATH/filesystem/targetfs/ EXTRA_LDFLAGS="-L$QNX_SDK_PATH/lib-target/"
Note: This builds the samples for the Orin GPU (SMS=87). To build for other GPUs, replace the SMS value with the compute capability of the target GPU.
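
SMS also accepts a space-separated list, so a single build can target multiple architectures. For example (a sketch; SM 72 is shown only as an illustration, and the remaining make variables are unchanged from the full command above):

    make SMS="72 87" TARGET_ARCH=aarch64 TARGET_OS=qnx <remaining flags as above>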

How to Run the CUDA Samples

You can use one of the following two options to make the CUDA sample applications available on the target:

  1. Use an NFS share.

    fs-nfs3 <host_ip>:<path_to_Cuda_samples>/  /Cuda
    cd /Cuda/bin/aarch64/qnx/release
    ./deviceQueryDrv    # you can launch any sample app
  2. Copy the files to the target.

    Target:

    # mkqnx6fs /dev/vblk_ufs10
    # mount /dev/vblk_ufs10 /data

    Host: Go to the CUDA sample folder where the apps were built, and run:

    # scp -r bin/ root@<target-IP>:/data/

    Target:

    # cd /data/bin/aarch64/qnx/release
    # ./deviceQueryDrv    # you can launch any sample app

TensorRT

A tensorrt folder is created in /usr/src during the SDK installation.

The Developer Guide and API Reference documents are available at https://docs.nvidia.com/deeplearning/tensorrt/.

The sample applications are available in /usr/src/tensorrt/samples.

How to Build the TensorRT Samples

export QNX_HOST=/{qnx-toolchain}/host/linux/x86_64/
export QNX_TARGET=/{qnx-toolchain}/target/qnx7/
export SDK_DIR={SDK DIR}  # example: /drive/drive-qnx
export QNX_TOOLCHAIN=/{qnx-toolchain}
export QNX_VERSION=7.1.0
export QNX_GCC_VERSION=8.3.0
export PATH=$QNX_HOST/usr/bin/:$PATH
export CUDA_INSTALL_DIR=/usr/local/cuda-safe-11.4
export TRT_LIB_DIR=/usr/lib/aarch64-unknown-nto-qnx
export CUDNN_INSTALL_DIR=/usr/lib/aarch64-unknown-nto-qnx
export PROTOBUF_INSTALL_DIR=/usr/lib/aarch64-unknown-nto-qnx

cd /usr/src/tensorrt/samples/
make TARGET=qnx USE_QCC=1 ENABLE_DLA=1 SAFETY_SAMPLE_BUILD=1 -j8

How to Run the TensorRT Samples on the Target

fs-nfs3 <host-IP>:<path_to_SDK>/TensorRT/  /TensorRT
cd /TensorRT/samples
./bin/sample_algorithm_selector

For further information, the TensorRT Developer Guide and API Reference documents are available at https://docs.nvidia.com/deeplearning/tensorrt/index.html.