DriveWorks SDK Reference
5.10.90 Release
For Test and Development only

Getting Started Using the NVIDIA DRIVE OS NGC Docker Container

Please see the "NVIDIA DRIVE OS NGC Docker Container Installation Guide" for instructions on downloading and installing Docker, for complete host system requirements, for access to NVIDIA® GPU Cloud (NGC™), for pulling the NVIDIA DRIVE® OS build/flash Docker image from NGC, for flashing the NVIDIA DRIVE Orin™ system, for setting up minicom and networking for secure shell (SSH) and network file system (NFS), and for other important installation information.

Note
You may need to refer to the "DRIVE OS 6.0 Installation Guide for NVIDIA Developer" instead of the "NVIDIA DRIVE OS NGC Docker Container Installation Guide" if you obtained your documentation from the NVIDIA DRIVE Developer Program instead of NVONLINE.

Core Binaries Installation

We assume that you have pulled from NGC the NVIDIA DRIVE OS Docker image named drive-agx-orin-linux-aarch64-sdk-build-x86_64-linux-gnu or drive-agx-orin-qnx-aarch64-sdk-build-x86_64-linux-gnu, which may be used to build and flash NVIDIA DRIVE OS, including the NVIDIA DriveWorks SDK and its sample applications. The NVIDIA DriveWorks SDK is precompiled and preinstalled for both the Linux x86 and the Linux or QNX aarch64 architectures under the path /usr/local/driveworks inside the NVIDIA DRIVE OS Docker container. The guest Docker container may also be used to flash the NVIDIA DRIVE Orin system following the procedure in the "NVIDIA DRIVE OS NGC Docker Container Installation Guide," in which case the NVIDIA DriveWorks SDK is precompiled and preinstalled for the Linux or QNX aarch64 architecture on the target system.

Guest Docker Container on Host System x86

Samples Binaries Installation

This section describes running precompiled samples with no compilation required. As an alternative, to compile samples from source and run those samples, please see Samples Compilation From Source below.

The NVIDIA DriveWorks SDK is precompiled and preinstalled for the Linux x86 architecture inside the NVIDIA DRIVE OS Docker container under the path /usr/local/driveworks. The NVIDIA DriveWorks SDK samples are precompiled for the Linux x86 architecture and preinstalled under /usr/local/driveworks/bin.

To run samples, first start the guest Docker container with the following command on the host system:

% docker run -it -v /dev/bus/usb:/dev/bus/usb \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY --gpus=all --privileged \
--net=host \
nvcr.io/drive/driveos-sdk/drive-agx-orin-linux-aarch64-sdk-build-x86:latest

or

% docker run -it -v /dev/bus/usb:/dev/bus/usb \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY --gpus=all --privileged \
--net=host \
nvcr.io/drive/driveos-sdk/drive-agx-orin-qnx-aarch64-sdk-build-x86:latest
Note
Components of the Docker image name and the image tag may vary slightly from the above. Please use the name and tag provided to you by NVIDIA.
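Because the image name and tag vary per release, it can help to keep them in variables so every docker invocation stays consistent. The sketch below is a hypothetical helper, not part of the SDK; DW_IMAGE_REPO and DW_IMAGE_TAG are illustrative names, and the defaults shown are only examples:

```shell
# Hypothetical helper (not part of the SDK): keep the image name and tag,
# which vary per release, in variables so every docker invocation stays in
# sync. DW_IMAGE_REPO and DW_IMAGE_TAG are illustrative names.
DW_IMAGE_REPO="${DW_IMAGE_REPO:-nvcr.io/drive/driveos-sdk/drive-agx-orin-linux-aarch64-sdk-build-x86}"
DW_IMAGE_TAG="${DW_IMAGE_TAG:-latest}"

dw_docker_run_cmd() {
    # Print (rather than run) the full command so it can be inspected first.
    echo "docker run -it -v /dev/bus/usb:/dev/bus/usb" \
         "-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY --gpus=all --privileged" \
         "--net=host ${DW_IMAGE_REPO}:${DW_IMAGE_TAG}"
}

dw_docker_run_cmd
```

To launch the container after inspecting the printed command, one could run `eval "$(dw_docker_run_cmd)"`.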
Warning
Enter all subsequent commands in this section at the guest Docker container command prompt # unless stated otherwise.

To run the "Hello World" sample, use the following command:

# /usr/local/driveworks/bin/sample_hello_world

Partial console output:

*************************************************
Welcome to Driveworks SDK
....
Happy autonomous driving!

Other samples from within the path /usr/local/driveworks/bin may be run inside the guest Docker container in a similar way. For a full list of samples, please see Samples.

Warning
Enter all subsequent commands in this section at the host system command prompt %.

To run samples that require access to a display, you may need to use the following command to allow access to the display for the docker group:

% xhost +local:docker

Console output:

non-network local connections being added to access control list

For security reasons, after exiting the guest Docker container, you should remove access to the display for the docker group using the following command:

% xhost -local:docker

Console output:

non-network local connections being removed from access control list
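Since it is easy to forget the revocation step, the xhost grant and revoke can be paired with a shell trap. This is an illustrative wrapper, not part of the SDK; the function name is hypothetical:

```shell
# Hypothetical wrapper (not part of the SDK): grant display access to the
# docker group only for the duration of a command, then revoke it even if
# the command fails or is interrupted.
run_with_display_access() {
    xhost +local:docker
    # Revoke access when this shell exits, regardless of how.
    trap 'xhost -local:docker' EXIT
    "$@"
}

# Usage (illustrative):
#   run_with_display_access docker run -it ... <image>
```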

Samples Compilation From Source

This section describes compiling samples from source and running those samples. As an alternative, to install and run precompiled samples with no compilation required, please see Samples Binaries Installation above.

The NVIDIA DriveWorks SDK is precompiled and preinstalled for the Linux x86 architecture on the NVIDIA DRIVE OS Docker image under the path /usr/local/driveworks to enable compiling samples and applications that leverage the NVIDIA DriveWorks SDK. Source code and CMake project and toolchain files for the NVIDIA DriveWorks SDK samples are located under the path /usr/local/driveworks/samples.

To compile the samples, first start the guest Docker container with the following command on the host system:

% docker run -it -v /dev/bus/usb:/dev/bus/usb \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY --gpus=all --privileged \
--net=host \
nvcr.io/drive/driveos-sdk/drive-agx-orin-linux-aarch64-sdk-build-x86:latest

or

% docker run -it -v /dev/bus/usb:/dev/bus/usb \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY --gpus=all --privileged \
--net=host \
nvcr.io/drive/driveos-sdk/drive-agx-orin-qnx-aarch64-sdk-build-x86:latest
Note
Components of the Docker image name and the image tag may vary slightly from the above. Please use the name and tag provided to you by NVIDIA.
Warning
Enter all subsequent commands in this section at the guest Docker container command prompt # unless stated otherwise.

Create the output directory and configure the project:

# mkdir -p /home/nvidia/build-x86_64-linux-gnu
# cmake -B /home/nvidia/build-x86_64-linux-gnu -S /usr/local/driveworks/samples

Console output:

-- The C compiler identification is GNU *
-- The CXX compiler identification is GNU *
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA *
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Found PkgConfig: /usr/bin/pkg-config (found version "*")
-- Found EGL: /usr/lib/x86_64-linux-gnu/libEGL.so
-- Found /usr/lib/x86_64-linux-gnu/libEGL.so:
-- - Includes: [/usr/include]
-- - Libraries: [/usr/lib/x86_64-linux-gnu/libEGL.so]
-- DW_EXPERIMENTAL_FORCE_EGL not set, EGL Support Disabled
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- The TensorRT version is TRT *
-- Building GLFW for X11 (static)
-- Found X11: /usr/include
-- Looking for XOpenDisplay in /usr/lib/x86_64-linux-gnu/libX11.so;/usr/lib/x86_64-linux-gnu/libXext.so
-- Looking for XOpenDisplay in /usr/lib/x86_64-linux-gnu/libX11.so;/usr/lib/x86_64-linux-gnu/libXext.so - found
-- Looking for gethostbyname
-- Looking for gethostbyname - found
-- Looking for connect
-- Looking for connect - found
-- Looking for remove
-- Looking for remove - found
-- Looking for shmat
-- Looking for shmat - found
-- Looking for IceConnectionNumber in ICE
-- Looking for IceConnectionNumber in ICE - found
-- **** Samples will be installed to `/home/nvidia/build-x86_64-linux-gnu/install/usr/local/driveworks/samples/bin' on the host filesystem. ****
-- Found CUDART: /usr/local/cuda/targets/x86_64-linux/include
-- Found cuBLAS: /usr/local/cuda/targets/x86_64-linux/include
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dw_base library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdw_base.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dw_calibration library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdw_calibration.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dw_egomotion library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdw_egomotion.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dw_imageprocessing library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdw_imageprocessing.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dw_pointcloudprocessing library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdw_pointcloudprocessing.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dw_sensors library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdw_sensors.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dw_vehicleio library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdw_vehicleio.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dw_dnn_base library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdw_dnn_base.so
-- Found 'dwvisualization/core/Visualization.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found driveworks_visualization library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdriveworks_visualization.so
-- Found 'dwvisualization/core/Visualization.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dwshared library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdwshared.so
-- Found 'dw/core/DynamicMemory.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dwdynamicmemory library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdwdynamicmemory.so
-- Found cuDNN: /usr/include (found version "*")
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dwcgf library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdwcgf.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/x86_64-Linux/include
-- Found dwframework_dwnodes library in /usr/local/driveworks/targets/x86_64-Linux/lib/libdwframework_dwnodes.so
-- Found NvSCI: /usr/include found components: NvSciSync
-- Configuring done
-- Generating done
-- Build files have been written to: /home/nvidia/build-x86_64-linux-gnu

Build the project:

# cd /home/nvidia/build-x86_64-linux-gnu
# make

Partial console output:

Scanning dependencies of target sample_hello_world
Building CXX object src/hello_world/CMakeFiles/sample_hello_world.dir/main.cpp.o
Linking CXX executable sample_hello_world
Built target sample_hello_world

Install the project:

# make install

Partial console output:

Install the project...
-- Install configuration: "Release"
....
-- Installing: /home/nvidia/build-x86_64-linux-gnu/install/usr/local/driveworks/samples/bin/sample_hello_world
-- Set runtime path of "/home/nvidia/build-x86_64-linux-gnu/install/usr/local/driveworks/samples/bin/sample_hello_world" to ""
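Before running a sample, it can be useful to confirm that the install step actually produced the expected binary. The helper below is a hypothetical sketch, not part of the SDK:

```shell
# Hypothetical helper (not part of the SDK): fail fast with a clear message
# if an expected sample binary is absent, e.g. because `make install` was
# skipped or the build prefix differs.
check_installed() {
    for f in "$@"; do
        if [ -x "$f" ]; then
            echo "ok: $f"
        else
            echo "missing or not executable: $f" >&2
            return 1
        fi
    done
}

# Usage (inside the guest Docker container, after `make install`):
#   check_installed /home/nvidia/build-x86_64-linux-gnu/install/usr/local/driveworks/samples/bin/sample_hello_world
```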

To run the "Hello World" sample, use the following command inside the guest Docker container:

# /home/nvidia/build-x86_64-linux-gnu/install/usr/local/driveworks/samples/bin/sample_hello_world

Partial console output:

*************************************************
Welcome to Driveworks SDK
....
Happy autonomous driving!

Other samples from within the path /home/nvidia/build-x86_64-linux-gnu/install/usr/local/driveworks/samples/bin may be run inside the guest Docker container in a similar way. For a full list of samples, please see Samples.

Warning
Enter all subsequent commands in this section at the host system command prompt %.

To run samples that require access to a display, you may need to use the following command to allow access to the display for the docker group:

% xhost +local:docker

Console output:

non-network local connections being added to access control list

For security reasons, after exiting the guest Docker container, you should remove access to the display for the docker group using the following command:

% xhost -local:docker

Console output:

non-network local connections being removed from access control list

Verification of the File System Layout

For instructions on verifying the file system layout on the guest NVIDIA DRIVE OS Docker container on the host system, please see Verifying the NVIDIA DriveWorks SDK Installation.

NVIDIA DRIVE Orin Target System Linux aarch64

NFS Server Configuration

It is convenient to configure an NFS server on the host system to make precompiled or cross-compiled samples binaries and data accessible to the target system.

Install the nfs-kernel-server package on the host system:

% sudo apt-get install -y --no-install-recommends nfs-kernel-server

Partial console output:

The following additional packages will be installed:
keyutils libasn1-8-heimdal libcap2 libdevmapper1.02.1 libevent-2.1-7
libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal
libheimntlm0-heimdal libhx509-5-heimdal libk5crypto3 libkeyutils1
libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 libldap-common
libnfsidmap2 libroken18-heimdal libsasl2-2 libsasl2-modules-db libsqlite3-0
libssl1.1 libtirpc-common libtirpc3 libwind0-heimdal libwrap0 netbase
nfs-common rpcbind ucf
....
Selecting previously unselected package nfs-kernel-server.
Preparing to unpack .../nfs-kernel-server_*_amd64.deb ...
Unpacking nfs-kernel-server ...
Setting up nfs-kernel-server ...
Creating config file /etc/exports with new version
Creating config file /etc/default/nfs-kernel-server with new version
Processing triggers for libc-bin ...

Create and export a folder and restart the NFS server:

% sudo mkdir -p /srv/nfs/driveworks-aarch64-linux-gnu
% echo '/srv/nfs/driveworks-aarch64-linux-gnu *(async,rw,no_root_squash,no_all_squash,no_subtree_check)' \
| sudo tee -a /etc/exports >/dev/null
% sudo exportfs -a
% sudo service nfs-kernel-server restart
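For reference, the export options used above carry their standard exports(5) meanings; an annotated copy of the /etc/exports entry (the annotations are explanatory comments only, the entry itself is one line):

```
#   async            - reply to clients before writes reach disk (faster)
#   rw               - allow both read and write access from clients
#   no_root_squash   - do not map root on the client to an anonymous user
#   no_all_squash    - preserve non-root client user and group IDs
#   no_subtree_check - disable subtree checking
/srv/nfs/driveworks-aarch64-linux-gnu *(async,rw,no_root_squash,no_all_squash,no_subtree_check)
```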
Warning
Enter all subsequent commands in this section at the target system command prompt $.

Connect to the target system using minicom, tcu_muxer, or SSH, and then mount the folder exported from the host system to the target system:

$ sudo mkdir -p /srv/nfs/driveworks-aarch64-linux-gnu
$ sudo mount -t nfs $REMOTE_HOST:/srv/nfs/driveworks-aarch64-linux-gnu \
/srv/nfs/driveworks-aarch64-linux-gnu

The environment variable $REMOTE_HOST denotes the hostname or IP address of the host system that has exported the folder; it must be set before running the above command.
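To avoid a cryptic mount error when $REMOTE_HOST is unset, a small pre-flight check can validate it first and print the mount command for inspection. This is a hypothetical sketch, not part of the SDK; the function name is illustrative:

```shell
# Hypothetical pre-flight check (not part of the SDK): fail early with a
# clear message if REMOTE_HOST is unset, and print the mount command that
# would be run, rather than hitting a cryptic mount error.
print_mount_cmd() {
    if [ -z "${REMOTE_HOST:-}" ]; then
        echo "error: set REMOTE_HOST to the NFS host's name or IP address" >&2
        return 1
    fi
    echo "sudo mount -t nfs ${REMOTE_HOST}:/srv/nfs/driveworks-aarch64-linux-gnu /srv/nfs/driveworks-aarch64-linux-gnu"
}

# Usage (illustrative):
#   REMOTE_HOST=192.168.0.10 print_mount_cmd
```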

Samples Binaries Installation

This section describes installing and running precompiled samples with no cross-compilation required. As an alternative, to cross-compile samples from source and run those samples, please see Samples Cross-Compilation From Source below.

We assume that you have flashed your NVIDIA DRIVE Orin target system using the NVIDIA DRIVE OS Docker container and finalized your target system setup following the procedures in the "NVIDIA DRIVE OS NGC Docker Container Installation Guide." The NVIDIA DriveWorks SDK will be precompiled for the Linux aarch64 architecture and preinstalled on the target system under the path /usr/local/driveworks.

To install the NVIDIA DriveWorks SDK samples precompiled for the Linux aarch64 architecture onto the target system, first copy the following Debian packages located under the path /drive/extra/driveworks inside the guest Docker container to the folder /srv/nfs/driveworks-aarch64-linux-gnu on the host system exported by the NFS server:

  • driveworks-data_*_all.deb
  • driveworks-samples_*~linux*_arm64.deb
  • driveworks-stm-samples_*~linux*_arm64.deb
  • driveworks-cgf-samples_*~linux*_arm64.deb

To copy the packages, start the guest Docker container with the following command on the host system:

% docker run -it -v /dev/bus/usb:/dev/bus/usb \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY --gpus=all \
--name=drive-agx-orin-linux-aarch64-sdk-build-x86 --privileged --net=host \
--rm \
nvcr.io/drive/driveos-sdk/drive-agx-orin-linux-aarch64-sdk-build-x86:latest
Note
Components of the Docker image name and the image tag may vary slightly from the above. Please use the name and tag provided to you by NVIDIA.

Copy the Debian packages from the guest Docker container to the folder on the host system exported by the NFS server:

% sudo docker cp \
drive-agx-orin-linux-aarch64-sdk-build-x86:/drive/extra/driveworks/. \
/srv/nfs/driveworks-aarch64-linux-gnu

If you do not have an NFS mount set up, you may use scp to copy the above Debian packages after connecting to the target system using minicom, tcu_muxer, or SSH. See "How to Run the CUDA Samples" in the "NVIDIA DRIVE OS NGC Docker Container Installation Guide" for an example of copying a file from the host system to the target system.
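Before installing, it can help to verify that all four Debian packages actually arrived in the shared folder, so that an incomplete copy fails fast with a readable message instead of a partial apt-get run. The helper below is a hypothetical sketch, not part of the SDK:

```shell
# Hypothetical pre-check (not part of the SDK): verify that every expected
# package matches a file in the given directory before invoking apt-get.
check_debs() {
    dir="$1"
    shift
    missing=0
    for pattern in "$@"; do
        # Expand the glob; if nothing matches, the literal pattern remains
        # and the -e test fails.
        set -- "$dir"/$pattern
        [ -e "$1" ] || { echo "missing: $pattern" >&2; missing=1; }
    done
    return "$missing"
}

# Usage (on the target system, before apt-get install):
#   check_debs /srv/nfs/driveworks-aarch64-linux-gnu \
#       'driveworks-data_*_all.deb' \
#       'driveworks-samples_*~linux*_arm64.deb' \
#       'driveworks-stm-samples_*~linux*_arm64.deb' \
#       'driveworks-cgf-samples_*~linux*_arm64.deb'
```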

Warning
Enter all subsequent commands in this section at the target system command prompt $.

Connect to the target system using minicom, tcu_muxer, or SSH, and then install the driveworks-data, driveworks-samples, driveworks-stm-samples, and driveworks-cgf-samples packages:

$ sudo apt-get install -y --no-install-recommends \
/srv/nfs/driveworks-aarch64-linux-gnu/driveworks-data_*_all.deb \
/srv/nfs/driveworks-aarch64-linux-gnu/driveworks-samples_*~linux*_arm64.deb \
/srv/nfs/driveworks-aarch64-linux-gnu/driveworks-stm-samples_*~linux*_arm64.deb \
/srv/nfs/driveworks-aarch64-linux-gnu/driveworks-cgf-samples_*~linux*_arm64.deb

Console output:

Suggested Packages:
driveworks-cgf-doc driveworks-doc driveworks-stm-doc
The following NEW packages will be installed:
driveworks-cgf-samples driveworks-data driveworks-samples
driveworks-stm-samples
0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Selecting previously unselected package driveworks-data.
Preparing to unpack .../driveworks-data_*_all.deb ...
Unpacking driveworks-data ...
Selecting previously unselected package driveworks-samples.
Preparing to unpack .../driveworks-samples_*~linux*_arm64.deb ...
Unpacking driveworks-samples ...
Selecting previously unselected package driveworks-cgf-samples.
Preparing to unpack .../driveworks-cgf-samples_*~linux*_arm64.deb ...
Unpacking driveworks-cgf-samples ...
Selecting previously unselected package driveworks-stm-samples.
Preparing to unpack .../driveworks-stm-samples_*~linux*_arm64.deb ...
Unpacking driveworks-stm-samples ...
Setting up driveworks-data ...
Setting up driveworks-samples ...
Setting up driveworks-stm-samples ...
Setting up driveworks-cgf-samples ...

The NVIDIA DriveWorks SDK samples precompiled for the Linux aarch64 architecture are then installed under /usr/local/driveworks/bin on the target system.

Warning
Enter all subsequent commands in this section at the target system command prompt $.

Connect to the target system using minicom, tcu_muxer, or SSH. Then, to run the "Hello World" sample, use the following command on the target system:

$ /usr/local/driveworks/bin/sample_hello_world

Partial console output:

*************************************************
Welcome to Driveworks SDK
....
Happy autonomous driving!

Other samples from within the path /usr/local/driveworks/bin may be run on the target system in a similar way. For a full list of samples, please see Samples.

Samples Cross-Compilation From Source

This section describes cross-compiling samples from source and running those samples. As an alternative, to install and run precompiled samples with no cross-compilation required, please see Samples Binaries Installation above.

The NVIDIA DriveWorks SDK is precompiled and preinstalled for the Linux aarch64 architecture on the NVIDIA DRIVE OS Docker image under the path /usr/local/driveworks/targets/aarch64-Linux to enable cross-compiling samples and applications that leverage the NVIDIA DriveWorks SDK. Source code and CMake project and toolchain files for the NVIDIA DriveWorks SDK samples are located under the path /usr/local/driveworks/samples.

To cross-compile the samples, first start the guest Docker container with the following command on the host system:

% docker run -it -v /dev/bus/usb:/dev/bus/usb \
-v $DRIVEWORKS_WORKSPACE:/home/nvidia --gpus=all --privileged --net=host \
nvcr.io/drive/driveos-sdk/drive-agx-orin-linux-aarch64-sdk-build-x86:latest
Note
Components of the Docker image name and the image tag may vary slightly from the above. Please use the name and tag provided to you by NVIDIA.

The environment variable $DRIVEWORKS_WORKSPACE denotes the location on the host file system below which you would like the cross-compiled sample binaries to be placed; it is mapped to the path /home/nvidia inside the guest Docker container and must be set before running the above command.
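Since docker will create an empty root-owned directory (or fail) if the bind-mount source does not exist, a pre-flight check on $DRIVEWORKS_WORKSPACE can save a confusing failure later. This is a hypothetical sketch, not part of the SDK; the function name is illustrative:

```shell
# Hypothetical pre-flight check (not part of the SDK): verify that
# DRIVEWORKS_WORKSPACE is set and names an existing directory before it is
# bind-mounted into the container.
require_workspace() {
    if [ -z "${DRIVEWORKS_WORKSPACE:-}" ] || [ ! -d "$DRIVEWORKS_WORKSPACE" ]; then
        echo "error: DRIVEWORKS_WORKSPACE must name an existing directory" >&2
        return 1
    fi
    echo "workspace: $DRIVEWORKS_WORKSPACE"
}

# Usage (on the host system, before docker run):
#   require_workspace && docker run -it -v "$DRIVEWORKS_WORKSPACE":/home/nvidia ...
```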

Warning
Enter all subsequent commands in this section at the guest Docker container command prompt #.

Create the output directory and configure the project:

# mkdir -p /home/nvidia/build-aarch64-linux-gnu
# cmake -B /home/nvidia/build-aarch64-linux-gnu \
-DCMAKE_TOOLCHAIN_FILE=/usr/local/driveworks/samples/cmake/Toolchain-V5L.cmake \
-DVIBRANTE_PDK=/drive/drive-linux -S /usr/local/driveworks/samples

Console output:

-- VIBRANTE_PDK = /drive/drive-linux
-- VIBRANTE_PDK_BRANCH = *
-- Vibrante version *
-- The C compiler identification is GNU *
-- The CXX compiler identification is GNU *
-- Check for working C compiler: /drive/toolchains/aarch64--glibc--stable-*/bin/aarch64-linux-gcc
-- Check for working C compiler: /drive/toolchains/aarch64--glibc--stable-*/bin/aarch64-linux-gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /drive/toolchains/aarch64--glibc--stable-*/bin/aarch64-linux-g++
-- Check for working CXX compiler: /drive/toolchains/aarch64--glibc--stable-*/bin/aarch64-linux-g++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA *
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Found PkgConfig: /usr/bin/pkg-config (found version "*")
-- Found EGL: /drive/drive-linux/lib-target/libEGL.so
-- Found /drive/drive-linux/lib-target/libEGL.so:
-- - Includes: [/drive/drive-linux/include]
-- - Libraries: [/drive/drive-linux/lib-target/libEGL.so]
-- Found: /drive/drive-linux/lib-target/libdrm.so
-- Header at: /drive/drive-linux/include
-- DW_EXPERIMENTAL_FORCE_EGL set and EGL Support Enabled
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Cross Compiling for Vibrante
-- The TensorRT version is TRT *
-- Building GLFW for X11 (static)
-- Found X11: /drive/drive-linux/include
-- Looking for XOpenDisplay in /drive/drive-linux/lib-target/libX11.so;/drive/drive-linux/lib-target/libXext.so
-- Looking for XOpenDisplay in /drive/drive-linux/lib-target/libX11.so;/drive/drive-linux/lib-target/libXext.so - found
-- Looking for gethostbyname
-- Looking for gethostbyname - found
-- Looking for connect
-- Looking for connect - found
-- Looking for remove
-- Looking for remove - found
-- Looking for shmat
-- Looking for shmat - found
-- Looking for IceConnectionNumber in ICE
-- Looking for IceConnectionNumber in ICE - not found
-- VIBRANTE_PDK_BRANCH = *
-- Found vibrante lib: /usr/local/driveworks/samples/3rdparty/linux-aarch64/vibrante/lib/libudev.so
-- Found vibrante lib: /usr/local/driveworks/samples/3rdparty/linux-aarch64/vibrante/lib/libusb-1.0.so
-- Found vibrante_Xlib: /usr/local/driveworks/samples/3rdparty/linux-aarch64/vibrante_Xlibs/lib/libXcursor.so
-- **** Please copy the contents of `/home/nvidia/build-aarch64-linux-gnu/install/usr/local/driveworks/samples/bin' on the host filesystem to `/usr/local/driveworks/samples/bin' on the target filesystem. ****
-- Found CUDART: /usr/local/cuda/targets/aarch64-linux/include
-- Found cuBLAS: /usr/local/cuda/targets/aarch64-linux/include
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dw_base library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdw_base.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dw_calibration library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdw_calibration.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dw_egomotion library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdw_egomotion.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dw_imageprocessing library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdw_imageprocessing.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dw_pointcloudprocessing library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdw_pointcloudprocessing.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dw_sensors library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdw_sensors.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dw_vehicleio library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdw_vehicleio.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dw_dnn_base library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdw_dnn_base.so
-- Found 'dwvisualization/core/Visualization.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found driveworks_visualization library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdriveworks_visualization.so
-- Found 'dwvisualization/core/Visualization.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dwshared library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdwshared.so
-- Found 'dw/core/DynamicMemory.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dwdynamicmemory library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdwdynamicmemory.so
-- Found cuDLA: /usr/local/cuda/targets/aarch64-linux/include
-- Found cuPVA: /usr/local/driveworks/targets/aarch64-Linux/include/cupva
-- Found cuDNN: /usr/include/aarch64-linux-gnu (found version "*")
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dwcgf library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdwcgf.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-Linux/include
-- Found dwframework_dwnodes library in /usr/local/driveworks/targets/aarch64-Linux/lib/libdwframework_dwnodes.so
-- Found NvSCI: /drive/drive-linux/include found components: NvSciSync
-- Configuring done
-- Generating done
-- Build files have been written to: /home/nvidia/build-aarch64-linux-gnu

Build the project:

# cd /home/nvidia/build-aarch64-linux-gnu
# make

Partial console output:

Scanning dependencies of target sample_hello_world
Building CXX object src/hello_world/CMakeFiles/sample_hello_world.dir/main.cpp.o
Linking CXX executable sample_hello_world
Built target sample_hello_world

You may ignore warnings about missing library dependencies during linking, since those dependencies will be available on the target system.

Install the project:

# make install

Partial console output:

Install the project...
-- Install configuration: "Release"
....
-- Installing: /home/nvidia/build-aarch64-linux-gnu/install/usr/local/driveworks/samples/bin/sample_hello_world
-- Set runtime path of "/home/nvidia/build-aarch64-linux-gnu/install/usr/local/driveworks/samples/bin/sample_hello_world" to ""

Exit the guest Docker container and copy the folder $DRIVEWORKS_WORKSPACE/build-aarch64-linux-gnu/install/usr/local/driveworks/samples/bin on the host system to the folder /srv/nfs/driveworks-aarch64-linux-gnu/usr/local/driveworks/samples/bin exported by the NFS server:

% sudo mkdir -p \
/srv/nfs/driveworks-aarch64-linux-gnu/usr/local/driveworks/samples/bin
% sudo cp -r \
$DRIVEWORKS_WORKSPACE/build-aarch64-linux-gnu/install/usr/local/driveworks/samples/bin \
/srv/nfs/driveworks-aarch64-linux-gnu/usr/local/driveworks/samples/bin

If you do not have an NFS mount set up, you may use scp to copy the contents of $DRIVEWORKS_WORKSPACE/build-aarch64-linux-gnu/install/usr/local/driveworks/samples/bin to /usr/local/driveworks/samples/bin after connecting to the target system using minicom, tcu_muxer, or SSH. See "How to Run the CUDA Samples" in the "NVIDIA DRIVE OS Debian Package Installation Guide" for an example of copying a file from the host system to the target system.

Warning
Enter all subsequent commands in this section at the target system command prompt $.

Connect to the target system using minicom, tcu_muxer, or SSH, and then mount the cross-compiled samples binaries from the host system to the target system:

$ mount -t nfs \
$REMOTE_HOST:/srv/nfs/driveworks-aarch64-linux-gnu/usr/local/driveworks/samples/bin \
/usr/local/driveworks/samples/bin

The environment variable $REMOTE_HOST denotes the hostname or IP address of the host system that has exported the folder containing the cross-compiled samples binaries; it must be set before running the above command. The path to the cross-compiled samples binaries on the target system must be exactly /usr/local/driveworks/samples/bin; otherwise, the samples will not be able to find dependent libraries or data.
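Because the exact mount path matters, a quick check that the directory is mounted where expected can prevent puzzling library-not-found errors. The helper below is illustrative, not part of the SDK, and assumes the common `src on dst type ...` format of `mount` output:

```shell
# Optional check (illustrative, not part of the SDK): confirm a directory
# is mounted at exactly the expected path before running any sample.
# Assumes `mount` prints lines of the form "src on dst type ...".
verify_mount() {
    mount | grep -q " on $1 " || { echo "not mounted: $1" >&2; return 1; }
    echo "mounted: $1"
}

# Usage (on the target system):
#   verify_mount /usr/local/driveworks/samples/bin
```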

Warning
Enter all subsequent commands in this section at the target system command prompt $.

Connect to the target system using minicom, tcu_muxer, or SSH. Then, to run the "Hello World" sample, use the following command on the target system:

$ /usr/local/driveworks/samples/bin/sample_hello_world

Partial console output:

*************************************************
Welcome to Driveworks SDK
....
Happy autonomous driving!

Other samples from within the path /usr/local/driveworks/samples/bin may be run on the target system in a similar way. For a full list of samples, please see Samples.

Verification of the File System Layout

For instructions on verifying the file system layout on the NVIDIA DRIVE Orin target system, please see Verifying the NVIDIA DriveWorks SDK Installation.

NVIDIA DRIVE Orin Target System QNX aarch64

The QNX file system on the NVIDIA DRIVE Orin target system may not be modified, so to run the NVIDIA DriveWorks SDK samples, the precompiled or cross-compiled samples binaries and data must be mounted from the host system to the target system using an NFS server running on the host system.

Install the nfs-kernel-server package on the host system:

% sudo apt-get install -y --no-install-recommends nfs-kernel-server

Partial console output:

The following additional packages will be installed:
keyutils libasn1-8-heimdal libcap2 libdevmapper1.02.1 libevent-2.1-7
libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal
libheimntlm0-heimdal libhx509-5-heimdal libk5crypto3 libkeyutils1
libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 libldap-common
libnfsidmap2 libroken18-heimdal libsasl2-2 libsasl2-modules-db libsqlite3-0
libssl1.1 libtirpc-common libtirpc3 libwind0-heimdal libwrap0 netbase
nfs-common rpcbind ucf
....
Selecting previously unselected package nfs-kernel-server.
Preparing to unpack .../nfs-kernel-server_*_amd64.deb ...
Unpacking nfs-kernel-server ...
Setting up nfs-kernel-server ...
Creating config file /etc/exports with new version
Creating config file /etc/default/nfs-kernel-server with new version
Processing triggers for libc-bin ...

Create and export a folder and restart the NFS server:

% sudo mkdir -p /srv/nfs/driveworks-aarch64-qnx
% echo '/srv/nfs/driveworks-aarch64-qnx *(async,rw,no_root_squash,no_all_squash,no_subtree_check)' \
| sudo tee -a /etc/exports >/dev/null
% sudo exportfs -a
% sudo service nfs-kernel-server restart
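For reference, the export entry added above breaks down as follows; the option meanings are the standard exports(5) semantics:

```text
/srv/nfs/driveworks-aarch64-qnx   directory to export
*                                 allow any client address to mount it
async                             acknowledge writes before they reach disk (faster, less durable)
rw                                read-write access
no_root_squash                    remote root keeps root privileges on the export
no_all_squash                     preserve non-root user and group IDs
no_subtree_check                  skip subtree verification on each request
```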

Samples Binaries Installation

This section describes installing and running precompiled samples with no cross-compilation required. As an alternative, to cross-compile samples from source and run those samples, please see Samples Cross-Compilation From Source below.

We assume that you have flashed your NVIDIA DRIVE Orin target system using the NVIDIA DRIVE OS Docker container and finalized your target system setup following the procedures in the "NVIDIA DRIVE OS NGC Docker Container Installation Guide." The NVIDIA DriveWorks SDK will be precompiled for the aarch64 architecture and preinstalled on the target system under the path /usr/local/driveworks.

To install the NVIDIA DriveWorks SDK samples precompiled for the QNX aarch64 architecture onto the target system, first copy and unpack the following Debian package and archives located under the path /drive/extra/driveworks inside the guest Docker container to the folder /srv/nfs/driveworks-aarch64-qnx on the host system exported by the NFS server:

  • driveworks-data_*_all.deb
  • driveworks-samples_*~qnx*_all.tar.gz
  • driveworks-stm-samples_*~qnx*_all.tar.gz
  • driveworks-cgf-samples_*~qnx*_all.tar.gz

Start the guest Docker container with the following command on the host system:

% docker run -it -v /dev/bus/usb:/dev/bus/usb \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY --gpus=all \
--name=drive-agx-orin-qnx-aarch64-sdk-build-x86 --privileged --net=host \
--rm \
nvcr.io/drive/driveos-sdk/drive-agx-orin-qnx-aarch64-sdk-build-x86:latest
Note
Components of the Docker image name and the image tag may vary slightly from the above. Please use the name and tag provided to you by NVIDIA.

Copy the Debian package and archives from the guest Docker container to the folder on the host system exported by the NFS server:

% sudo docker cp \
drive-agx-orin-qnx-aarch64-sdk-build-x86:/drive/extra/driveworks/. \
/srv/nfs/driveworks-aarch64-qnx

Install the driveworks-data package on the host system:

% sudo apt-get install -y --no-install-recommends \
/srv/nfs/driveworks-aarch64-qnx/driveworks-data_*_all.deb
% sudo ln -s /usr/local/driveworks-* /usr/local/driveworks
% sudo rm -f /srv/nfs/driveworks-aarch64-qnx/driveworks-data_*_all.deb

Console output:

The following NEW package will be installed:
driveworks-data
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Selecting previously unselected package driveworks-data.
Preparing to unpack .../driveworks-data_*_all.deb ...
Unpacking driveworks-data ...
Setting up driveworks-data ...

Export the samples data folder and restart the NFS server:

% echo '/usr/local/driveworks/data *(async,ro,no_root_squash,no_all_squash,no_subtree_check)' \
| sudo tee -a /etc/exports >/dev/null
% sudo exportfs -a
% sudo service nfs-kernel-server restart

Unpack the following archives to the folder exported by the NFS server:

% sudo tar -xzf \
/srv/nfs/driveworks-aarch64-qnx/driveworks-samples_*~qnx*_all.tar.gz \
-C /srv/nfs/driveworks-aarch64-qnx
% sudo tar -xzf \
/srv/nfs/driveworks-aarch64-qnx/driveworks-stm-samples_*~qnx*_all.tar.gz \
-C /srv/nfs/driveworks-aarch64-qnx
% sudo tar -xzf \
/srv/nfs/driveworks-aarch64-qnx/driveworks-cgf-samples_*~qnx*_all.tar.gz \
-C /srv/nfs/driveworks-aarch64-qnx
% sudo ln -s /srv/nfs/driveworks-aarch64-qnx/usr/local/driveworks-* \
/srv/nfs/driveworks-aarch64-qnx/usr/local/driveworks
% sudo rm -f \
/srv/nfs/driveworks-aarch64-qnx/driveworks-samples_*~qnx*_all.tar.gz \
/srv/nfs/driveworks-aarch64-qnx/driveworks-stm-samples_*~qnx*_all.tar.gz \
/srv/nfs/driveworks-aarch64-qnx/driveworks-cgf-samples_*~qnx*_all.tar.gz
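To confirm that the unpack and symlink steps succeeded, you can resolve the link on the host system. This is a minimal sketch; the helper function name is ours, not part of the SDK tooling:

```shell
# Resolve a symlink and confirm its target exists; prints "link -> target"
# on success. The path passed below matches the symlink created above.
resolve_and_check() {
  target=$(readlink -f "$1") || return 1
  [ -e "$target" ] && echo "$1 -> $target"
}
resolve_and_check /srv/nfs/driveworks-aarch64-qnx/usr/local/driveworks \
  || echo "link target missing; re-check the tar and ln commands above"
```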
Warning
Enter all subsequent commands in this section at the target system command prompt $.

Connect to the target system using minicom, tcu_muxer, or SSH, and then mount the samples data and precompiled samples binaries from the host system to the target system:

$ fs-nfs3 $REMOTE_HOST:/usr/local/driveworks/data/samples \
/usr/local/driveworks/data/samples
$ fs-nfs3 $REMOTE_HOST:/srv/nfs/driveworks-aarch64-qnx/usr/local/driveworks/bin \
/usr/local/driveworks/samples/bin

The environment variable $REMOTE_HOST denotes the hostname or IP address of the host system that has exported the folder containing the precompiled samples binaries. $REMOTE_HOST must be set before running the above commands, and the paths to the samples data and precompiled samples binaries on the target system must be exactly /usr/local/driveworks/data and /usr/local/driveworks/samples/bin, respectively; otherwise, the samples will not be able to find dependent libraries or data.

Due to QNX file system limitations, do not copy the contents of /usr/local/driveworks/data/samples or /srv/nfs/driveworks-aarch64-qnx/usr/local/driveworks/bin onto the target system; use the NFS mounts instead.
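Before mounting, it can help to confirm that both mount points exist on the target system. The helper function below is a sketch of ours, not part of the SDK tooling; it only reports status and never modifies the file system:

```shell
# Report whether a directory exists so it can serve as an fs-nfs3 mount
# point; prints "ok: <path>" or "missing: <path>". Illustrative helper only.
check_mountpoint() {
  if [ -d "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}
check_mountpoint /usr/local/driveworks/data/samples
check_mountpoint /usr/local/driveworks/samples/bin
```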

The NVIDIA DriveWorks SDK samples precompiled for the QNX aarch64 architecture will be available under /usr/local/driveworks/samples/bin on the target system.

To run the "Hello World" sample, use the following command on the target system:

$ /usr/local/driveworks/samples/bin/sample_hello_world

Partial console output:

*************************************************
Welcome to Driveworks SDK
....
Happy autonomous driving!

Other samples from within the path /usr/local/driveworks/samples/bin may be run on the target system in a similar way. For a full list of samples, please see Samples.

Samples Cross-Compilation From Source

This section describes cross-compiling samples from source and running those samples. As an alternative, to install and run precompiled samples with no cross-compilation required, please see Samples Binaries Installation above.

The NVIDIA DriveWorks SDK is precompiled and preinstalled for the QNX aarch64 architecture on the NVIDIA DRIVE OS Docker image under the path /usr/local/driveworks/targets/aarch64-QNX to enable cross-compiling samples and applications that leverage the NVIDIA DriveWorks SDK. Source code and CMake project and toolchain files for the NVIDIA DriveWorks SDK samples are located under the path /usr/local/driveworks/samples.

To cross-compile the samples, first start the guest Docker container with the following command on the host system:

% docker run -it -v /dev/bus/usb:/dev/bus/usb \
-v $DRIVEWORKS_WORKSPACE:/home/nvidia -v $HOME/.qnx:/root/.qnx \
-v $QNX_BASE:/sdp -e QNX_BASE=/sdp -e QNX_HOST=/sdp/host/linux/x86_64 \
-e QNX_TARGET=/sdp/target/qnx7 -e QNX_TOP=/drive/drive-qnx \
-e QNX_VERSION=7.1.0 -e CUDA_PATH=/usr/local/cuda-safe-11.4 \
-e TEGRA_TOP=/drive/drive-foundation \
-e PATH=/sdp/host/linux/x86_64/usr/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
--gpus=all --privileged --net=host \
nvcr.io/drive/driveos-sdk/drive-agx-orin-qnx-aarch64-sdk-build-x86:latest
Note
Components of the Docker image name and the image tag may vary slightly from the above. Please use the name and tag provided to you by NVIDIA.

The environment variable $DRIVEWORKS_WORKSPACE denotes the location on the host file system below which the cross-compiled sample binaries will be placed; it is mapped to the path /home/nvidia inside the guest Docker container. The environment variable $QNX_BASE denotes the location on the host file system of the QNX Software Development Platform installation. Both environment variables must be set before running the above command.
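If the two variables are not already set, export them first. The paths below are placeholders for illustration; point them at your actual workspace directory and QNX Software Development Platform installation:

```shell
# Placeholder paths; adjust both to match your host system layout.
export DRIVEWORKS_WORKSPACE=$HOME/driveworks-workspace
export QNX_BASE=$HOME/qnx710
mkdir -p "$DRIVEWORKS_WORKSPACE"
```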

Warning
Enter all subsequent commands in this section at the guest Docker container command prompt #.

Create the output directory and configure the project:

# mkdir -p /home/nvidia/build-aarch64-qnx
# cmake -B /home/nvidia/build-aarch64-qnx -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_TOOLCHAIN_FILE=/usr/local/driveworks/samples/cmake/Toolchain-V5Q.cmake \
-DCUDA_DIR=$CUDA_PATH -DVIBRANTE_PDK=$QNX_TOP \
-S /usr/local/driveworks/samples

Console output:

-- VIBRANTE_PDK = /drive/drive-qnx
-- VIBRANTE_PDK_BRANCH = *
-- Vibrante version *
-- Enabling QNX QOS Compiler (qcc/q++)
-- Enabling QNX QOS Compiler (qcc/q++)
-- The C compiler identification is QCC *
-- The CXX compiler identification is QCC *
-- Check for working C compiler: /sdp/host/linux/x86_64/usr/bin/qcc
-- Check for working C compiler: /sdp/host/linux/x86_64/usr/bin/qcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /sdp/host/linux/x86_64/usr/bin/q++
-- Check for working CXX compiler: /sdp/host/linux/x86_64/usr/bin/q++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA *
-- Check for working CUDA compiler: /usr/local/cuda-safe-*/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda-safe-*/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Found PkgConfig: /usr/bin/pkg-config (found version "*")
-- Found EGL: /drive/drive-qnx/lib-target/libEGL.so
-- Found /drive/drive-qnx/lib-target/libEGL.so:
-- - Includes: [/drive/drive-qnx/include]
-- - Libraries: [/drive/drive-qnx/lib-target/libEGL.so]
-- Header at: /drive/drive-qnx/include
-- DW_EXPERIMENTAL_FORCE_EGL set and EGL Support Enabled
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Cross Compiling for Vibrante
-- The TensorRT version is TRT *
-- Building GLFW for Screen
-- VIBRANTE_PDK_BRANCH = *
-- **** Please mount `/home/nvidia/build-aarch64-qnx/install/usr/local/driveworks/samples/bin' on the host filesystem onto `/usr/local/driveworks/samples/bin' on the target filesystem using NFS. ****
-- Found CUDART: /usr/local/cuda-safe-*/targets/aarch64-qnx/include
-- Found cuBLAS: /usr/local/cuda-safe-*/targets/aarch64-qnx/include
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dw_base library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdw_base.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dw_calibration library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdw_calibration.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dw_egomotion library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdw_egomotion.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dw_imageprocessing library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdw_imageprocessing.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dw_pointcloudprocessing library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdw_pointcloudprocessing.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dw_sensors library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdw_sensors.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dw_vehicleio library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdw_vehicleio.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dw_dnn_base library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdw_dnn_base.so
-- Found 'dwvisualization/core/Visualization.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found driveworks_visualization library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdriveworks_visualization.so
-- Found 'dwvisualization/core/Visualization.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dwshared library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdwshared.so
-- Found 'dw/core/DynamicMemory.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dwdynamicmemory library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdwdynamicmemory.so
-- Found cuDLA: /usr/local/cuda-safe-*/targets/aarch64-qnx/include
-- Found cuPVA: /usr/local/driveworks/targets/aarch64-QNX/include/cupva
-- Found cuDNN: /usr/include/aarch64-unknown-nto-qnx (found version "*")
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dwcgf library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdwcgf.so
-- Found 'dw/core/Version.h' in /usr/local/driveworks/targets/aarch64-QNX/include
-- Found dwframework_dwnodes library in /usr/local/driveworks/targets/aarch64-QNX/lib/libdwframework_dwnodes.so
-- Found NvSCI: /drive/drive-qnx/include found components: NvSciSync
-- Configuring done
-- Generating done
-- Build files have been written to: /home/nvidia/build-aarch64-qnx

Build the project:

# cd /home/nvidia/build-aarch64-qnx
# make

Partial console output:

Scanning dependencies of target sample_hello_world
Building CXX object src/hello_world/CMakeFiles/sample_hello_world.dir/main.cpp.o
Linking CXX executable sample_hello_world
Built target sample_hello_world

You may ignore warnings about missing library dependencies during linking, since those dependencies will be available on the target system.

Install the project:

# make install

Partial console output:

Install the project...
-- Install configuration: "Release"
....
-- Installing: /home/nvidia/build-aarch64-qnx/install/usr/local/driveworks/samples/bin/sample_hello_world
-- Set runtime path of "/home/nvidia/build-aarch64-qnx/install/usr/local/driveworks/samples/bin/sample_hello_world" to ""

Exit the guest Docker container and copy the folder $DRIVEWORKS_WORKSPACE/build-aarch64-qnx/install/usr/local/driveworks/samples/bin on the host system to the folder /srv/nfs/driveworks-aarch64-qnx/usr/local/driveworks/samples/bin exported by the NFS server:

% sudo mkdir -p \
/srv/nfs/driveworks-aarch64-qnx/usr/local/driveworks/samples/bin
% sudo cp -r \
$DRIVEWORKS_WORKSPACE/build-aarch64-qnx/install/usr/local/driveworks/samples/bin/. \
/srv/nfs/driveworks-aarch64-qnx/usr/local/driveworks/samples/bin
Warning
Enter all subsequent commands in this section at the target system command prompt $.

Connect to the target system using minicom, tcu_muxer, or SSH, and then mount the samples data and cross-compiled samples binaries from the host system to the target system:

$ fs-nfs3 $REMOTE_HOST:/usr/local/driveworks/data/samples \
/usr/local/driveworks/data/samples
$ fs-nfs3 $REMOTE_HOST:/srv/nfs/driveworks-aarch64-qnx/usr/local/driveworks/samples/bin \
/usr/local/driveworks/samples/bin

The environment variable $REMOTE_HOST denotes the hostname or IP address of the host system that has exported the folder containing the cross-compiled samples binaries. $REMOTE_HOST must be set before running the above commands, and the paths to the samples data and cross-compiled samples binaries on the target system must be exactly /usr/local/driveworks/data and /usr/local/driveworks/samples/bin, respectively; otherwise, the samples will not be able to find dependent libraries or data.

Due to QNX file system limitations, do not copy the contents of /usr/local/driveworks/data/samples or /srv/nfs/driveworks-aarch64-qnx/usr/local/driveworks/samples/bin onto the target system; use the NFS mounts instead.

To run the "Hello World" sample, use the following command on the target system:

$ /usr/local/driveworks/samples/bin/sample_hello_world

Partial console output:

*************************************************
Welcome to Driveworks SDK
....
Happy autonomous driving!

Other samples from within the path /usr/local/driveworks/samples/bin may be run on the target system in a similar way. For a full list of samples, please see Samples.

Verification of the File System Layout

For instructions on verifying the file system layout on the NVIDIA DRIVE Orin target system, please see Verifying the NVIDIA DriveWorks SDK Installation.