Getting Started Using the NVIDIA DriveOS Docker Container#
Please see the NVIDIA DriveOS Installation Guide for information on the following:
Complete host system requirements
Downloading and installing Docker
Pulling the NVIDIA® DriveOS™ build/flash Docker image from NVIDIA NVONLINE
Flashing the NVIDIA DRIVE® AGX™ Thor Target System
Setting up minicom and networking for secure shell (SSH) and network file system (NFS)
Other important installation information
Please also ensure that the NVIDIA Container Toolkit for the Docker container runtime is installed by following its Installation Guide. Your host operating system should be the latest release of Ubuntu Desktop 24.04 LTS “Noble Numbat” and all packages installed by apt should be updated to their most recent version.
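As an optional pre-flight check (this helper is illustrative, not part of the NVIDIA tooling), you can confirm that the docker CLI is installed and that the Docker daemon reports the NVIDIA runtime registered by the Container Toolkit; matching on the string "nvidia" in the `docker info` output is a heuristic:

```shell
# Illustrative pre-flight check: verify the docker CLI is on the PATH, then
# look for the NVIDIA runtime in the daemon's reported configuration.
check_nvidia_runtime() {
    command -v docker >/dev/null 2>&1 || { echo "docker: not installed"; return 1; }
    # `docker info` requires a running daemon; grep for the runtime name.
    docker info 2>/dev/null | grep -qi nvidia \
        && echo "NVIDIA runtime: detected" \
        || echo "NVIDIA runtime: not detected (is the daemon running?)"
}
```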
Core Binaries Installation#
Ensure that you have pulled the following NVIDIA DriveOS Docker image from NVIDIA NVONLINE, which is used to build and flash NVIDIA DriveOS, including NVIDIA DriveWorks and its sample applications:
drive-agx-linux-nsr-aarch64-sdk-build-x86
NVIDIA DriveWorks is precompiled and preinstalled for both the Linux x86 and
Linux aarch64 target architectures under the path /usr/local/driveworks inside the
NVIDIA DriveOS Docker container. The guest Docker container can also be used
to flash the NVIDIA DRIVE AGX Thor Target System following the procedure in
the NVIDIA DriveOS Installation Guide, in which case NVIDIA
DriveWorks is precompiled and preinstalled on the target system.
Guest Docker Container on Host System x86#
Samples Binaries Installation#
Hint
This section describes running precompiled samples with no compilation required. As an alternative, to compile samples from source and run those samples, please see Samples Compilation From Source below.
NVIDIA DriveWorks is precompiled and preinstalled for the Linux x86 architecture inside the NVIDIA DriveOS Docker container under the path
/usr/local/driveworks. The NVIDIA DriveWorks samples are precompiled for the Linux x86 architecture and preinstalled under
/usr/local/driveworks/bin.
To run samples, first start the guest Docker container with the following command on the host system:
% docker run -it -e DISPLAY -e NVIDIA_DRIVER_CAPABILITIES=all \
-v /dev/bus/usb:/dev/bus/usb -v /tmp/.X11-unix:/tmp/.X11-unix --gpus=all \
--net=host --privileged --sysctl fs.mqueue.msg_max=4096 \
--sysctl fs.mqueue.queues_max=512 --ulimit msgqueue=2097152 \
edge.urm.nvidia.com/sw-driveos-linux-docker-local/drive-agx-linux-nsr-aarch64-sdk-build-x86:tag
Note
Components of the Docker image name and the image tag might vary from the above. Please use the name and tag provided to you by NVIDIA.
Warning
Enter all subsequent commands in this section at the guest Docker container command prompt # unless stated otherwise.
To run the “Hello World” sample, use the following command:
# /usr/local/driveworks/bin/sample_hello_world
Partial console output:
*************************************************
Welcome to DriveWorks SDK
....
Happy autonomous driving!
Other samples from within the path /usr/local/driveworks/bin can be run inside the guest Docker container in a similar way. For a full list
of samples, please see Samples.
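To see which precompiled samples are available, you can list the binaries whose names start with sample_; the helper below is just a convenience sketch, with the documented install location as its default:

```shell
# List sample binaries under a DriveWorks bin directory (defaults to the
# documented install path inside the container).
list_samples() {
    ls "${1:-/usr/local/driveworks/bin}" | grep '^sample_'
}
```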
Warning
Enter all subsequent commands in this section at the host system command prompt %.
To run samples that require access to a display, you might need to use the following command to allow access to the display for the docker
group:
% xhost +local:docker
Console output:
non-network local connections being added to access control list
For security reasons, after exiting the guest Docker container, you should remove access to the display for the docker group by using the following
command:
% xhost -local:docker
Console output:
non-network local connections being removed from access control list
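The display-access steps above can be wrapped so that access for the docker group is revoked when the container exits, even if the run fails. This is a sketch, not NVIDIA tooling; pass the image name and tag provided to you by NVIDIA as the first argument, and any extra `docker run` options after it:

```shell
# Grant the docker group display access only for the lifetime of one
# container run, then revoke it unconditionally.
run_with_display() {
    image="$1"; shift
    xhost +local:docker
    docker run -it -e DISPLAY -e NVIDIA_DRIVER_CAPABILITIES=all \
        -v /tmp/.X11-unix:/tmp/.X11-unix --gpus=all --net=host "$@" "$image"
    status=$?
    xhost -local:docker   # revoke access even if docker run failed
    return $status
}
```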
Samples Compilation From Source#
Hint
This section describes compiling samples from source and running those samples. As an alternative, to install and run precompiled samples with no compilation required, please see Samples Binaries Installation above.
NVIDIA DriveWorks is precompiled and preinstalled for the Linux x86 architecture on the NVIDIA DriveOS Docker image under the path
/usr/local/driveworks to enable compiling samples and applications that leverage NVIDIA DriveWorks. Source code and CMake project and
toolchain files for the NVIDIA DriveWorks samples are located under the path /usr/local/driveworks/samples.
To compile the samples, first start the guest Docker container by running the following command on the host system:
% docker run -it -e DISPLAY -e NVIDIA_DRIVER_CAPABILITIES=all \
-v /dev/bus/usb:/dev/bus/usb -v /tmp/.X11-unix:/tmp/.X11-unix --gpus=all \
--net=host --privileged --sysctl fs.mqueue.msg_max=4096 \
--sysctl fs.mqueue.queues_max=512 --ulimit msgqueue=2097152 \
edge.urm.nvidia.com/sw-driveos-linux-docker-local/drive-agx-linux-nsr-aarch64-sdk-build-x86:tag
Note
Components of the Docker image name and the image tag might vary from the above. Please use the name and tag provided to you by NVIDIA.
Warning
Enter all subsequent commands in this section at the guest Docker container command prompt # unless stated otherwise.
Create the output directory and configure the project:
# mkdir -p /home/nvidia/build-linux-host-x86
# cmake -B /home/nvidia/build-linux-host-x86 -S /usr/local/driveworks/samples
Console output:
-- The C compiler identification is GNU *
-- The CXX compiler identification is GNU *
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA *
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Performing Test C_COMPILER_FLAG_FNO_OMIT_FRAME_POINTER
-- Performing Test C_COMPILER_FLAG_FNO_OMIT_FRAME_POINTER - Success
-- Performing Test CXX_COMPILER_FLAG_FNO_OMIT_FRAME_POINTER
-- Performing Test CXX_COMPILER_FLAG_FNO_OMIT_FRAME_POINTER - Success
-- Performing Test C_COMPILER_FLAG_FNO_TREE_VECTORIZE
-- Performing Test C_COMPILER_FLAG_FNO_TREE_VECTORIZE - Success
-- Performing Test CXX_COMPILER_FLAG_FNO_TREE_VECTORIZE
-- Performing Test CXX_COMPILER_FLAG_FNO_TREE_VECTORIZE - Success
-- Performing Test C_COMPILER_FLAG_FSTACK_PROTECTOR_STRONG
-- Performing Test C_COMPILER_FLAG_FSTACK_PROTECTOR_STRONG - Success
-- Performing Test CXX_COMPILER_FLAG_FSTACK_PROTECTOR_STRONG
-- Performing Test CXX_COMPILER_FLAG_FSTACK_PROTECTOR_STRONG - Success
-- Performing Test CXX_COMPILER_FLAG_WERROR_ALL
-- Performing Test CXX_COMPILER_FLAG_WERROR_ALL - Success
-- Building GLFW for X11 (static)
-- Found X11: /usr/include
-- Looking for XOpenDisplay in /usr/lib/x86_64-linux-gnu/libX11.so;/usr/lib/x86_64-linux-gnu/libXext.so
-- Looking for XOpenDisplay in /usr/lib/x86_64-linux-gnu/libX11.so;/usr/lib/x86_64-linux-gnu/libXext.so - found
-- Looking for gethostbyname
-- Looking for gethostbyname - found
-- Looking for connect
-- Looking for connect - found
-- Looking for remove
-- Looking for remove - found
-- Looking for shmat
-- Looking for shmat - found
-- Looking for IceConnectionNumber in ICE
-- Looking for IceConnectionNumber in ICE - found
-- **** Samples will be installed to `/home/nvidia/build-linux-host-x86/install/usr/local/driveworks/samples/bin' on the host filesystem. ****
-- Found CUDART: TRUE
-- Found NvSCI: TRUE
-- Found cuBLAS: /usr/local/cuda/targets/x86_64-linux/include
-- Configuring done
-- Generating done
-- Build files have been written to: /home/nvidia/build-linux-host-x86
Build the project:
# cd /home/nvidia/build-linux-host-x86
# make
Partial console output:
Scanning dependencies of target sample_hello_world
Building CXX object src/hello_world/CMakeFiles/sample_hello_world.dir/main.cpp.o
Linking CXX executable sample_hello_world
Built target sample_hello_world
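By default make runs one job at a time. The samples build can be parallelized with one job per CPU core; this is standard GNU make behavior, sketched here by computing the job count only:

```shell
# One make job per available CPU core; run `make -j"$jobs"` in the build
# directory (/home/nvidia/build-linux-host-x86) to parallelize the build.
jobs=$(nproc)
echo "parallel jobs: $jobs"
```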
Install the project:
# make install
Partial console output:
Install the project...
-- Install configuration: "Release"
....
-- Installing: /home/nvidia/build-linux-host-x86/install/usr/local/driveworks/samples/bin/sample_hello_world
-- Set runtime path of "/home/nvidia/build-linux-host-x86/install/usr/local/driveworks/samples/bin/sample_hello_world" to ""
To run the “Hello World” sample, use the following command inside the guest Docker container:
# /home/nvidia/build-linux-host-x86/install/usr/local/driveworks/samples/bin/sample_hello_world
Partial console output:
*************************************************
Welcome to DriveWorks SDK
....
Happy autonomous driving!
Other samples from within the path /home/nvidia/build-linux-host-x86/install/usr/local/driveworks/samples/bin can be run inside the guest
Docker container in a similar way. For a full list of samples, please see Samples.
Warning
Enter all subsequent commands in this section at the host system command prompt %.
To run samples that require access to a display, you might need to use the following command to allow access to the display for the docker
group:
% xhost +local:docker
Console output:
non-network local connections being added to access control list
For security reasons, after exiting the guest Docker container, you should remove access to the display for the docker group by running the following
command:
% xhost -local:docker
Console output:
non-network local connections being removed from access control list
Verification of the File System Layout#
For instructions on verifying the file system layout on the guest DriveOS Docker container on the host system, please see Verifying the NVIDIA DriveWorks Installation.
NVIDIA DRIVE AGX Thor Target System Linux aarch64#
Warning
The samples data and precompiled samples binaries are not preinstalled on the target system. You must download them to the host system following the procedure in Downloading and Installing Additional DriveOS Packages in the NVIDIA DriveOS Installation Guide and install them on the target system following the procedures in the sections below.
NFS Server Configuration#
It is convenient to configure an NFS server on the host system to make precompiled or cross-compiled samples binaries and data accessible to the target system.
Install the nfs-kernel-server package on the host system:
% sudo apt --no-install-recommends install nfs-kernel-server
Partial console output:
The following additional packages will be installed:
keyutils libasn1-8-heimdal libcap2 libdevmapper1.02.1 libevent-2.1-7
libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal
libheimntlm0-heimdal libhx509-5-heimdal libk5crypto3 libkeyutils1
libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 libldap-common
libnfsidmap2 libroken18-heimdal libsasl2-2 libsasl2-modules-db libsqlite3-0
libssl1.1 libtirpc-common libtirpc3 libwind0-heimdal libwrap0 netbase
nfs-common rpcbind ucf
....
Selecting previously unselected package nfs-kernel-server.
Preparing to unpack .../nfs-kernel-server_*_amd64.deb ...
Unpacking nfs-kernel-server ...
Setting up nfs-kernel-server ...
Creating config file /etc/exports with new version
Creating config file /etc/default/nfs-kernel-server with new version
Processing triggers for libc-bin ...
Create and export a folder and restart the NFS server:
% sudo mkdir -p /srv/nfs/driveworks-linux-nsr-aarch64
% echo '/srv/nfs/driveworks-linux-nsr-aarch64 *(async,rw,no_root_squash,no_all_squash,no_subtree_check)' \
| sudo tee -a /etc/exports >/dev/null
% sudo exportfs -a
% sudo service nfs-kernel-server restart
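After restarting the server you can optionally confirm the export entry was written. The helper below is a small sketch; the file argument exists only so the check can be pointed at a copy of /etc/exports:

```shell
# Check that /etc/exports (or a file passed as $1) contains the export entry.
check_export() {
    grep -q 'driveworks-linux-nsr-aarch64' "${1:-/etc/exports}" \
        && echo "export entry present" \
        || echo "export entry missing"
}
```

You can also list the active exports on the host with `showmount -e localhost`.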
Warning
Enter all subsequent commands in this section at the target system command prompt $.
Connect to the target system using minicom, tcu_muxer, or SSH, and then set the environment variable $REMOTE_HOST to the
hostname or IP address of the host system that exported the folder. The environment variable $REMOTE_HOST must be set before running
the commands below.
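For example (the address below is a placeholder; substitute your host system's actual hostname or IP address):

```shell
# Placeholder value -- substitute your host system's address.
export REMOTE_HOST=192.168.1.100
```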
Mount the folder exported from the host system to the target system:
$ sudo mkdir -p /srv/nfs/driveworks-linux-nsr-aarch64
$ sudo mount -t nfs $REMOTE_HOST:/srv/nfs/driveworks-linux-nsr-aarch64 \
/srv/nfs/driveworks-linux-nsr-aarch64
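If you want the mount to persist across target reboots, an entry in /etc/fstab on the target can be used instead. Note that /etc/fstab does not expand environment variables, so substitute the literal host address (shown here as a placeholder):

```
192.168.1.100:/srv/nfs/driveworks-linux-nsr-aarch64  /srv/nfs/driveworks-linux-nsr-aarch64  nfs  defaults  0  0
```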
VNC Server Configuration#
It is convenient to start and connect to a VNC server running on the NVIDIA DRIVE AGX Thor Target System.
Warning
Enter all subsequent commands in this section at the target system command prompt $.
Install the x11vnc package on the target system:
$ sudo apt --no-install-recommends install x11vnc
Note
If your target system does not have an external network connection, you may download the x11vnc package and its dependencies on the
host system using the command sudo apt --download-only install x11vnc:arm64. The packages will be downloaded to the folder
/var/cache/apt/archives with the suffix _arm64.deb. You should copy them to the target system using the NFS server configured above.
To start the VNC server on the target system:
$ sudo service gdm3 stop
$ sudo service gdm stop
$ sudo -b X -ac -noreset
$ export DISPLAY=:0
$ sudo xrandr --fb 1920x1080
$ sudo x11vnc -geometry 1920x1080 -display :0 &
Samples Binaries Installation#
Hint
This section describes installing and running precompiled samples with no cross-compilation required. As an alternative, to cross-compile samples from source and run those samples, please see Samples Cross-Compilation From Source below.
We assume that you have flashed your NVIDIA DRIVE AGX Thor Target System using the NVIDIA DriveOS Docker container and finalized your target
system setup following the procedures in the NVIDIA DriveOS Installation Guide. NVIDIA DriveWorks is then precompiled for the
Linux aarch64 architecture and preinstalled on the target system under the path /usr/local/driveworks.
Download the following archives containing samples data and precompiled samples binaries following the procedure in the NVIDIA DriveOS Installation Guide:
driveworks-data-all-*.tar.gz
driveworks-samples-linux-nsr-aarch64-*.tar.gz
driveworks-stm-samples-linux-nsr-aarch64-*.tar.gz
To install the NVIDIA DriveWorks samples precompiled for the Linux aarch64 architecture onto the target system, first copy the following
archives to the folder /srv/nfs/driveworks-linux-nsr-aarch64 exported by the NFS server.
% sudo cp driveworks-data-all-*.tar.gz \
driveworks-samples-linux-nsr-aarch64-*.tar.gz \
driveworks-stm-samples-linux-nsr-aarch64-*.tar.gz \
/srv/nfs/driveworks-linux-nsr-aarch64
Hint
If you do not have an NFS mount set up, you may use scp to copy the above archives and configuration file after connecting to the target system using minicom, tcu_muxer, or SSH. See “How to Run the CUDA Samples” in the NVIDIA DriveOS Installation Guide for an example of copying a file from the host system to the target system.
Warning
Enter all subsequent commands in this section at the target system command prompt $.
Connect to the target system using minicom, tcu_muxer, or SSH, and then unpack the following archives:
$ sudo tar --keep-directory-symlink --no-overwrite-dir -xzf \
/srv/nfs/driveworks-linux-nsr-aarch64/driveworks-data-all-*.tar.gz -C /
$ sudo tar --keep-directory-symlink --no-overwrite-dir -xzf \
/srv/nfs/driveworks-linux-nsr-aarch64/driveworks-samples-linux-nsr-aarch64-*.tar.gz \
-C /
$ sudo tar --keep-directory-symlink --no-overwrite-dir -xzf \
/srv/nfs/driveworks-linux-nsr-aarch64/driveworks-stm-samples-linux-nsr-aarch64-*.tar.gz \
-C /
Merge the configuration file /usr/local/driveworks/targets/aarch64-Linux/config/nvsciipc.cfg with the file /etc/nvsciipc.cfg on the
target system:
$ sudo mv -f /etc/nvsciipc.cfg /etc/nvsciipc.cfg.old
$ awk '!seen[$0]++' /etc/nvsciipc.cfg.old \
/usr/local/driveworks/targets/aarch64-Linux/config/nvsciipc.cfg \
| sudo tee /etc/nvsciipc.cfg
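The awk expression above implements the merge: `!seen[$0]++` is true only the first time a given line is encountered, so the combined output keeps every line exactly once, with entries from the existing /etc/nvsciipc.cfg taking precedence in order. A self-contained illustration with throwaway files:

```shell
# Demonstrate first-occurrence de-duplication across two files.
printf 'endpoint_a\nendpoint_b\n' > /tmp/old.cfg
printf 'endpoint_b\nendpoint_c\n' > /tmp/new.cfg
# endpoint_b appears only once in the merged output.
awk '!seen[$0]++' /tmp/old.cfg /tmp/new.cfg
```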
The NVIDIA DriveWorks samples precompiled for the Linux aarch64 architecture are then installed under /usr/local/driveworks/bin on the
target system.
Warning
Enter all subsequent commands in this section at the target system command prompt $.
Connect to the target system using minicom, tcu_muxer, or SSH. To run the “Hello World” sample, use the following command on the target system:
$ /usr/local/driveworks/bin/sample_hello_world
Partial console output:
*************************************************
Welcome to DriveWorks SDK
....
Happy autonomous driving!
Other samples from within the path /usr/local/driveworks/bin may be run on the target system in a similar way. For a full list of
samples, please see Samples.
Samples Cross-Compilation From Source#
Hint
This section describes cross-compiling samples from source and running those samples. As an alternative, to install and run precompiled samples with no cross-compilation required, please see Samples Binaries Installation above.
NVIDIA DriveWorks is precompiled and preinstalled for the Linux aarch64 architecture on the NVIDIA DriveOS Docker image under the path
/usr/local/driveworks/targets/aarch64-Linux to enable cross-compiling samples and applications that leverage NVIDIA DriveWorks. Source code
and CMake project and toolchain files for the NVIDIA DriveWorks samples are located under the path /usr/local/driveworks/samples.
Set the environment variable $DRIVEWORKS_WORKSPACE to the location on the host file system below which you would like the
cross-compiled samples binaries to be placed; this location is mapped to the path /home/nvidia inside the guest Docker container. The
environment variable $DRIVEWORKS_WORKSPACE must be set before running the command below.
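For example (the path below is only a suggestion; any writable location on the host works):

```shell
# Placeholder workspace path -- choose any writable directory on the host.
export DRIVEWORKS_WORKSPACE="$HOME/driveworks-workspace"
mkdir -p "$DRIVEWORKS_WORKSPACE"
```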
To cross-compile the samples, first start the guest Docker container with the following command on the host system:
% docker run -it -e DISPLAY -e NVIDIA_DRIVER_CAPABILITIES=all \
-v /dev/bus/usb:/dev/bus/usb -v /tmp/.X11-unix:/tmp/.X11-unix \
-v $DRIVEWORKS_WORKSPACE:/home/nvidia --gpus=all --net=host --privileged \
--sysctl fs.mqueue.msg_max=4096 --sysctl fs.mqueue.queues_max=512 \
--ulimit msgqueue=2097152 \
edge.urm.nvidia.com/sw-driveos-linux-docker-local/drive-agx-linux-nsr-aarch64-sdk-build-x86:tag
Note
Components of the Docker image name and the image tag might vary from the above. Please use the name and tag provided to you by NVIDIA.
Warning
Enter all subsequent commands in this section at the guest Docker container command prompt #.
Create the output directory and configure the project:
# mkdir -p /home/nvidia/build-linux-nsr-aarch64
# cmake -B /home/nvidia/build-linux-nsr-aarch64 \
-DCMAKE_TOOLCHAIN_FILE=/usr/local/driveworks/samples/cmake/Toolchain-V5L.cmake \
-S /usr/local/driveworks/samples
Console output:
-- VIBRANTE_PDK = /drive/drive-linux
-- Found PDK version *
-- The C compiler identification is GNU *
-- The CXX compiler identification is GNU *
-- Check for working C compiler: /drive/toolchains/aarch64--glibc--stable-*/bin/aarch64-linux-gcc
-- Check for working C compiler: /drive/toolchains/aarch64--glibc--stable-*/bin/aarch64-linux-gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /drive/toolchains/aarch64--glibc--stable-*/bin/aarch64-linux-g++
-- Check for working CXX compiler: /drive/toolchains/aarch64--glibc--stable-*/bin/aarch64-linux-g++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA *
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found DRM: /drive/drive-linux/include
-- Found EGL: /drive/drive-linux/include
-- Found GLES: /drive/drive-linux/include
-- Performing Test C_COMPILER_FLAG_FNO_OMIT_FRAME_POINTER
-- Performing Test C_COMPILER_FLAG_FNO_OMIT_FRAME_POINTER - Success
-- Performing Test CXX_COMPILER_FLAG_FNO_OMIT_FRAME_POINTER
-- Performing Test CXX_COMPILER_FLAG_FNO_OMIT_FRAME_POINTER - Success
-- Performing Test C_COMPILER_FLAG_FNO_TREE_VECTORIZE
-- Performing Test C_COMPILER_FLAG_FNO_TREE_VECTORIZE - Success
-- Performing Test CXX_COMPILER_FLAG_FNO_TREE_VECTORIZE
-- Performing Test CXX_COMPILER_FLAG_FNO_TREE_VECTORIZE - Success
-- Performing Test C_COMPILER_FLAG_FSTACK_PROTECTOR_STRONG
-- Performing Test C_COMPILER_FLAG_FSTACK_PROTECTOR_STRONG - Success
-- Performing Test CXX_COMPILER_FLAG_FSTACK_PROTECTOR_STRONG
-- Performing Test CXX_COMPILER_FLAG_FSTACK_PROTECTOR_STRONG - Success
-- Performing Test CXX_COMPILER_FLAG_WERROR_ALL
-- Performing Test CXX_COMPILER_FLAG_WERROR_ALL - Success
-- Building GLFW for X11 (static)
-- Found X11: /drive/drive-linux/include
-- Looking for XOpenDisplay in /drive/drive-linux/lib-target/libX11.so;/drive/drive-linux/lib-target/libXext.so
-- Looking for XOpenDisplay in /drive/drive-linux/lib-target/libX11.so;/drive/drive-linux/lib-target/libXext.so - found
-- Looking for gethostbyname
-- Looking for gethostbyname - found
-- Looking for connect
-- Looking for connect - found
-- Looking for remove
-- Looking for remove - found
-- Looking for shmat
-- Looking for shmat - found
-- **** Please copy the contents of `/home/nvidia/build-linux-nsr-aarch64/install/usr/local/driveworks/samples/bin' on the host filesystem to `/usr/local/driveworks/samples/bin' on the target filesystem. ****
-- Found CUDART: TRUE
-- Found NvSCI: TRUE
-- Found cuBLAS: /usr/local/cuda/targets/aarch64-linux/include
-- Configuring done
-- Generating done
-- Build files have been written to: /home/nvidia/build-linux-nsr-aarch64
Build the project:
# cd /home/nvidia/build-linux-nsr-aarch64
# make
Partial console output:
Scanning dependencies of target sample_hello_world
Building CXX object src/hello_world/CMakeFiles/sample_hello_world.dir/main.cpp.o
Linking CXX executable sample_hello_world
Built target sample_hello_world
You may ignore warnings about missing library dependencies during linking, since those dependencies will be available on the target system.
Install the project:
# make install
Partial console output:
Install the project...
-- Install configuration: "Release"
....
-- Installing: /home/nvidia/build-linux-nsr-aarch64/install/usr/local/driveworks/samples/bin/sample_hello_world
-- Set runtime path of "/home/nvidia/build-linux-nsr-aarch64/install/usr/local/driveworks/samples/bin/sample_hello_world" to ""
Exit the guest Docker container and copy the folder $DRIVEWORKS_WORKSPACE/build-linux-nsr-aarch64/install/usr/local/driveworks/samples/bin
on the host system to the folder /srv/nfs/driveworks-linux-nsr-aarch64/usr/local/driveworks/samples/bin exported by the NFS server:
% sudo mkdir -p \
/srv/nfs/driveworks-linux-nsr-aarch64/usr/local/driveworks/samples/bin
% sudo cp -r \
$DRIVEWORKS_WORKSPACE/build-linux-nsr-aarch64/install/usr/local/driveworks/samples/bin \
/srv/nfs/driveworks-linux-nsr-aarch64/usr/local/driveworks/samples
Hint
If you do not have an NFS mount set up, you may use scp to copy the contents of
$DRIVEWORKS_WORKSPACE/build-linux-nsr-aarch64/install/usr/local/driveworks/samples/bin to /usr/local/driveworks/samples/bin after
connecting to the target system using minicom, tcu_muxer, or SSH. See “How to Run the CUDA Samples” in the NVIDIA DriveOS
Installation Guide for an example of copying a file from the host system to the target system.
Warning
Enter all subsequent commands in this section at the target system command prompt $.
Connect to the target system using minicom, tcu_muxer, or SSH, and then set the environment variable $REMOTE_HOST to the
hostname or IP address of the host system that exported the folder containing the cross-compiled samples binaries. The environment
variable $REMOTE_HOST must be set before running the command below. The path to the cross-compiled samples binaries on the target system
must be exactly /usr/local/driveworks/samples/bin; otherwise, the samples will not be able to find dependent libraries or data.
Mount the cross-compiled samples binaries from the host system to the target system:
$ sudo mount -t nfs \
$REMOTE_HOST:/srv/nfs/driveworks-linux-nsr-aarch64/usr/local/driveworks/samples/bin \
/usr/local/driveworks/samples/bin
Warning
Enter all subsequent commands in this section at the target system command prompt $.
Connect to the target system using minicom, tcu_muxer, or SSH. To run the “Hello World” sample, use the following command on the target system:
$ /usr/local/driveworks/samples/bin/sample_hello_world
Partial console output:
*************************************************
Welcome to DriveWorks SDK
....
Happy autonomous driving!
Other samples from within the path /usr/local/driveworks/samples/bin may be run on the target system in a similar way. For a full list
of samples, please see Samples.
Verification of the File System Layout#
For instructions on verifying the file system layout on the NVIDIA DRIVE AGX Thor Target System, please see Verifying the NVIDIA DriveWorks Installation.