Installing NVIDIA DriveOS for NVIDIA NVONLINE Users#

To install NVIDIA DriveOS™ 7.0 as an NVIDIA NVONLINE user, you can pull and run Linux Docker images from the Artifactory container registry, then bind and flash the target system.

Images Available for NVIDIA DriveOS 7.0#

Image Name                                  Intent
-----------------------------------------   -----------------------------------------
drive-agx-linux-nsr-aarch64-sdk-build-x86   Build and flash the DriveOS 7.0 Linux SDK

Pulling and Running the DriveOS Docker Container Image via Artifactory#

After configuring registry access to Artifactory, you can pull and run the DriveOS Docker container image on your host system. This is useful for confirming access and pre-pulling the image before Preparing to Bind and Flash the Target System.
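
To confirm access, you can first log in to the registry and pre-pull the image. The following is a minimal sketch of the standard Docker workflow: log in with your partners.nvidia.com email as the username and the Artifactory reference token from NVONLINE as the password, and replace <version>-<build> with the image tag published for your release:

$ sudo docker login edge.urm.nvidia.com
$ sudo docker pull \
  edge.urm.nvidia.com/sw-driveos-linux-docker-local/drive-agx-linux-nsr-aarch64-sdk-build-x86:<version>-<build>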

Tip

Prior to installation, you can remove previously installed DriveOS Docker images and containers to free disk space.
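
For example, the standard Docker cleanup commands can be used for this; the IDs below are placeholders for whatever docker ps -a and docker images report on your host:

$ sudo docker ps -a              # list all containers, including stopped ones
$ sudo docker rm <container_id>  # remove an old DriveOS container
$ sudo docker images             # list locally stored images
$ sudo docker rmi <image_id>     # remove an old DriveOS image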

Pull and run the DriveOS Docker container image by running the following command on your host system:

$ sudo docker run -it --privileged --net=host -v /dev/:/dev/ \
  -v ${WORKSPACE}:/home/nvidia/ \
  edge.urm.nvidia.com/sw-driveos-linux-docker-local/drive-agx-linux-nsr-aarch64-sdk-build-x86:<version>-<build>

Where ${WORKSPACE} is a directory on your host that is mounted into the container at /home/nvidia/, and <version>-<build> is the version and build number of the image tag published for your release.

Preparing to Bind and Flash the Target System#

To prepare to bind and flash NVIDIA DriveOS to the target system from the Docker container, perform the following steps:

  1. Connect the DRIVE AGX to the host system.

    Note

    Refer to the NVIDIA DRIVE AGX Thor Developer Kit Hardware Quick Start Guide.

  2. With the DRIVE AGX connected, ensure that no other processes, such as TCUMuxer or Minicom, are holding a lock on /dev/ttyACM* before starting the Docker container:

    • To check if a process is holding the lock, run the following command:

      $ lsof -w /dev/ttyACM*
      
    • To kill a process that is locking a specific port, run the following command:

      $ kill -9 <pid>
      

      Where <pid> is the Process ID (PID) reported by lsof. A combined one-liner is sketched after this list.
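
    If you prefer a single step, lsof -t prints only the PIDs, which can be piped straight to kill; this one-liner is a convenience sketch, not a required part of the procedure:

      $ lsof -t -w /dev/ttyACM* | xargs -r kill -9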

  3. Start the DriveOS Docker container by running the following command:

    $ sudo docker run -it --privileged --net=host -v /drive_flashing:/drive_flashing \
      -v /dev/:/dev/ -v ${WORKSPACE}:/home/nvidia/ \
      edge.urm.nvidia.com/sw-driveos-linux-docker-local/drive-agx-linux-nsr-aarch64-sdk-build-x86:<version>-<build>
    
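
    Once the container starts, you can confirm that the serial devices were passed through; this is a quick sanity check, and the exact /dev/ttyACM numbering varies by setup:

    # ls -l /dev/ttyACM*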

Binding and Flashing the Target#

After following Preparing to Bind and Flash the Target System, you can bind and flash the target manually.

Binding and Flashing the Target Manually#

You can bind and flash the target manually by performing the following steps:

  1. To bind the target, perform the following steps inside the container:

    1. Change to the following directory:

      # cd $NV_WORKSPACE/drive-foundation
      
    2. Run the bind command for your board variant:

      Important

      For assistance determining your board variant, see DRIVE Platform Supported Boards.

      • For Thor-U boards, run the following command:

        # ./make/bind_partitions \
          -b <board_variant> drive_av.linux \
          -p dev_nsr \
          ENABLE_THOR_U=y
        

        Where -b is set to the appropriate board variant from DRIVE Platform Supported Boards, and ENABLE_THOR_U=y selects the Thor-U configuration.

      • For Thor-X boards, run the following command:

        # ./make/bind_partitions \
          -b <board_variant> drive_av.linux \
          -p dev_nsr
        

        Where -b is set to the appropriate board variant from DRIVE Platform Supported Boards.

      Note

      If the bind command fails due to the following error:

      update-binfmts: exiting due to previous errors
      

      You must manually run the following apt commands on the host system, outside of the container, to ensure the QEMU package is properly installed:

      $ sudo apt-get remove --purge qemu-user-static
      $ sudo apt-get install qemu-user-static
      
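
      After reinstalling, you can optionally confirm that the aarch64 interpreter is registered again; this check assumes Ubuntu's binfmt-support tooling is present:

      $ update-binfmts --display | grep -A1 qemu-aarch64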
  2. To flash the images, perform the following steps inside the container:

    Note

    The first time you flash the 7.0.3.0 release, add the bootburn --init_persistent_partitions option to update the user metadata structures; persistent data is then preserved across subsequent flashes of the board. A sketch of this first-time invocation follows the flash commands below.

    1. Change to the following directory:

      # cd $NV_WORKSPACE/drive-foundation/tools/flashtools/bootburn/
      
    2. Run the flash command for your board variant:

      • For Thor-U boards, run the following command:

        # ./bootburn.py \
          -b <board_variant> \
          --board_config /drive/drive-foundation/platform-config/hardware/nvidia/platform/t264/automotive/automotive-platform-configs/p3960/<p3960-1n>/<p3960-1n-sw0x>/board_configs/<p3960-1n-sw0x>_thor_u.json \
          -x /dev/ttyACM2
        

        Where:

        • -b is set to the appropriate board variant from DRIVE Platform Supported Boards.

        • --board_config specifies the file path to the board-specific Thor-U configuration, where <p3960-1n> and <p3960-1n-sw0x> are replaced based on the appropriate board variant from DRIVE Platform Supported Boards. For example, for the p3960-10-sw03 board variant:

          /drive/drive-foundation/platform-config/hardware/nvidia/platform/t264/automotive/automotive-platform-configs/p3960/p3960-10/p3960-10-sw03/board_configs/p3960-10-sw03_thor_u.json
          
      • For Thor-X boards, run the following command:

        # ./bootburn.py \
          -b <board_variant> \
          -x /dev/ttyACM2
        

        Where -b is set to the appropriate board variant from DRIVE Platform Supported Boards, and -x specifies the serial device used to communicate with the board.
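
    As noted above, when flashing the 7.0.3.0 release for the first time, include the --init_persistent_partitions option. The following is a sketch only; Thor-U boards would additionally pass the --board_config path shown earlier:

      # ./bootburn.py \
        -b <board_variant> \
        --init_persistent_partitions \
        -x /dev/ttyACM2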

Optional: Downloading and Installing Additional NVIDIA DriveOS Packages from Artifactory#

The following additional NVIDIA DriveOS packages are hosted on Artifactory and can be downloaded to the /drive_flashing directory inside the container:

• driveos-cuda-repo-ubuntu2404-12-8-local_*_amd64.deb

• driveos-cuda-thor-nsr-repo-cross-aarch64-ubuntu2404-12-8-local_*_all.deb

• nv-tensorrt-repo-ubuntu2404-cuda12.8-trt*-d7l-cross-ga-*_amd64.deb

• nv-tensorrt-repo-ubuntu2404-cuda12.8-trt*-d7l-target-ga-*_arm64.deb

• cudnn-local-tegra-repo-ubuntu2404-*_arm64.deb

• driveos-cuda-thor-nsr-tegra-repo-ubuntu2404-12-8-local_*arm64.deb

• driveos_llm_sdk-*.tar.gz

• pva-algos-lib-*-aarch64-linux-*.deb

• pva-algos-lib-docs-html-*-algos.zip

• NsightSystems-cli-linux-drive-*-nda-arm64-*.deb

• NsightSystems-linux-drive-*-nda-*.deb

• driveworks-data-all-*.tar.gz

• driveworks-samples-linux-nsr-aarch64-*.tar.gz

• driveworks-stm-samples-linux-nsr-aarch64-*.tar.gz

To access these additional packages:

  1. Start the container as described in Preparing to Bind and Flash the Target System, which shares /drive_flashing with the container.

  2. From inside the container, export the same credentials that you used to log in to the Artifactory Docker registry from NVIDIA NVONLINE:

    # export ARTIFACTORY_USERNAME=<your_partners_nvidia_com_email>
    # export ARTIFACTORY_REF_TOKEN=<reference_token_copied_from_nvonline>
    # export ARTIFACTORY_URL=https://edge.urm.nvidia.com/artifactory/sw-driveos-linux-docker-local/drive-agx-linux-nsr-aarch64-sdk-build-x86/{VERSION}-{BUILD}
    

    Where <your_partners_nvidia_com_email> is your partners.nvidia.com email address, <reference_token_copied_from_nvonline> is the Artifactory reference token copied from NVIDIA NVONLINE, and {VERSION}-{BUILD} matches the version and build of the image you pulled.
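
    You can verify the credentials and URL before downloading anything; an HTTP 200 response code indicates access is working:

    # curl -s -u $ARTIFACTORY_USERNAME:$ARTIFACTORY_REF_TOKEN -o /dev/null -w "%{http_code}\n" "${ARTIFACTORY_URL}/files/download_files.sh"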

  3. Optional: To list the available packages without downloading them, run the following command:

    # curl -s -u $ARTIFACTORY_USERNAME:$ARTIFACTORY_REF_TOKEN "${ARTIFACTORY_URL}/files/download_files.sh" | bash -s -- --list
    
  4. Download the packages by running the following command:

    # curl -s -u $ARTIFACTORY_USERNAME:$ARTIFACTORY_REF_TOKEN "${ARTIFACTORY_URL}/files/download_files.sh" | bash -s -- --download
    
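
    If you prefer not to pipe a remote script directly into bash, you can equivalently download the helper script first, review it, and then run it:

    # curl -s -u $ARTIFACTORY_USERNAME:$ARTIFACTORY_REF_TOKEN -o download_files.sh "${ARTIFACTORY_URL}/files/download_files.sh"
    # bash download_files.sh --download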
  5. You can install packages that have an nv-driveos* prefix by using the DRIVEInstaller utility:

    1. Inside the container, change to the following directory:

      # cd driveinstaller
      
    2. Run the DRIVEInstaller utility with --installtype set to install, and specify the file path to the .tgz file:

      # ./driveinstaller --installtype install \
        --pkgpath <path/to/file>.tgz
      

      For example:

      # ./driveinstaller --installtype install \
        --pkgpath /drive_flashing/<filename>.tgz
      

Next Steps#

After successfully completing Binding and Flashing the Target, continue to Finalizing the Installation.