Supporting the sample_hello_world Application

Application developers often deal with nested dependencies. Sometimes these dependencies have requirements that are impossible to identify even from reading a vendor’s developer guide. For example, a dependency two or more layers down might require the ability to write to a location on the host system. The dependency assumes that the location exists on the host, but it does not, and the developer encounters an error.

Note:
  • This topic is intended to show ways to identify dependencies, debug common issues, and support your application in Docker on the target. This is not intended as an official document to support all available applications and binaries in the file system. Solutions provided for the use cases in this topic might not be applicable to other applications and binaries and might warrant further investigation and debugging by the developer.
  • Vendors might sometimes provide support for Docker with their applications and libraries. Find out from your vendors if such support exists for your use case on the NVIDIA DRIVE® OS Linux Guest VM.

This topic uses the sample_hello_world application to show how to investigate some of these common issues and how to identify and resolve the requirements for running your applications in Docker within the NVIDIA DRIVE OS Linux Guest VM.

This topic assumes that the DriveWorks sample is available on the target file system and has been compiled. If not, see the DriveWorks SDK Reference Documentation on how to add DriveWorks and the sample to the target file system.

Identifying Dependencies and Requirements

To identify dependencies and requirements, use the ldd tool.

ldd lists the shared object dependencies that your compiled application requires. Run ldd against the sample_hello_world application and view the output.
$ ldd sample_hello_world

Edit the drivers.csv file to include the shared library paths from the ldd output that are not present in drivers.csv. Be aware that some paths might be symlinks. Identify what the symlinks resolve to and also include those paths in the drivers.csv file.

Note: For symlinks, include the entry as sym, <path> rather than lib, <path>.
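
For reference, the following is a minimal, unofficial sketch of how you might generate candidate entries from the ldd output. It assumes the compiled binary is in the current directory and that each resolvable dependency appears on an ldd output line containing =>; review the generated entries before adding them to drivers.csv.

$ ldd ./sample_hello_world | awk '/=>/ {print $3}' | sort -u | while read lib; do
      target=$(readlink -f "$lib")          # resolve the symlink, if any
      if [ "$lib" != "$target" ]; then
          echo "sym, $lib"                  # entry for the symlink itself
          echo "lib, $target"               # entry for the file it resolves to
      else
          echo "lib, $lib"
      fi
  done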

After adding the required libraries and symlinks, the drivers.csv file should look similar to the following:

dir, /usr/lib/firmware/tegra23x
                
lib, /usr/lib/libcuda.so.1
lib, /usr/lib/libnvrm_gpu.so
lib, /usr/lib/libnvrm_mem.so
lib, /usr/lib/libnvrm_sync.so
lib, /usr/lib/libnvrm_host1x.so
lib, /usr/lib/libnvos.so
lib, /usr/lib/libnvsocsys.so
lib, /usr/lib/libnvtegrahv.so
lib, /usr/lib/libnvsciipc.so
lib, /usr/lib/libnvrm_chip.so
lib, /usr/lib/libnvcucompat.so
                
lib, /lib/aarch64-linux-gnu/libEGL_nvidia.so.0
sym, /usr/lib/libcuda.so
lib, /usr/lib/libcuda.so.1
sym, /usr/lib/libnvscibuf.so
lib, /usr/lib/libnvscibuf.so.1
lib, /usr/lib/libnvscicommon.so.1
lib, /usr/lib/libnvsciipc.so
lib, /lib/aarch64-linux-gnu/libudev.so.1
lib, /lib/aarch64-linux-gnu/libusb-1.0.so.0
lib, /lib/aarch64-linux-gnu/librt.so.1
lib, /lib/aarch64-linux-gnu/libX11.so.6
lib, /lib/aarch64-linux-gnu/libXrandr.so.2
lib, /lib/aarch64-linux-gnu/libXinerama.so.1
lib, /lib/aarch64-linux-gnu/libXi.so.6
lib, /lib/aarch64-linux-gnu/libXcursor.so.1
lib, /usr/lib/libdrm.so.2
lib, /lib/aarch64-linux-gnu/libdl.so.2
lib, /lib/aarch64-linux-gnu/libpthread.so.0
lib, /lib/aarch64-linux-gnu/libXext.so.6
lib, /lib/aarch64-linux-gnu/libXxf86vm.so.1
lib, /lib/aarch64-linux-gnu/libGLESv2_nvidia.so.2
lib, /lib/aarch64-linux-gnu/libstdc++.so.6
lib, /lib/aarch64-linux-gnu/libm.so.6
lib, /lib/aarch64-linux-gnu/libgcc_s.so.1
lib, /lib/aarch64-linux-gnu/libc.so.6
lib, /usr/lib/libgnat-23.20220512.so
lib, /usr/lib/libnvrm_host1x.so
lib, /usr/lib/libnvdla_runtime.so
lib, /usr/lib/libnvidia-glsi.so.535.00
lib, /usr/lib/libnvrm_chip.so
lib, /usr/lib/libnvrm_surface.so
lib, /usr/lib/libnvrm_sync.so
lib, /usr/lib/libnvos.so
lib, /usr/lib/libnvrm_gpu.so
lib, /usr/lib/libnvrm_mem.so
lib, /usr/lib/libNvFsiCom.so
lib, /usr/lib/libnvmedia_iep_sci.so
lib, /usr/lib/libnvmedia2d.so
lib, /usr/lib/libnvmedialdc.so
lib, /usr/lib/libnvmedia_ijpe_sci.so
lib, /usr/lib/libnvmedia_ide_parser.so
lib, /usr/lib/libnvmedia_ide_sci.so
lib, /usr/lib/aarch64-linux-gnu/libz.so.1
lib, /usr/lib/libnvmedia_tensor.so
lib, /usr/lib/libnvmedia_dla.so
lib, /usr/lib/libnvscistream.so.1
lib, /usr/lib/aarch64-linux-gnu/libgomp.so.1
lib, /usr/lib/libnvsipl.so
lib, /usr/lib/libnvsipl_devblk.so
lib, /usr/lib/libnvsipl_query.so
lib, /usr/lib/libnvparser.so
lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.8
lib, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8
lib, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8
lib, /lib/aarch64-linux-gnu/libxcb.so.1
lib, /lib/aarch64-linux-gnu/libXrender.so.1
lib, /lib/aarch64-linux-gnu/libXfixes.so.3
lib, /usr/lib/libnvpvaintf.so
sym, /lib/aarch64-linux-gnu/libGL.so
sym, /lib/aarch64-linux-gnu/libGL.so.1
lib, /lib/aarch64-linux-gnu/libGL.so.1.7.0
lib, /lib/aarch64-linux-gnu/libGLX.so.0
lib, /lib/aarch64-linux-gnu/libGLdispatch.so.0
lib, /lib/aarch64-linux-gnu/libGLU.so.1
lib, /usr/lib/libnvsocsys.so
lib, /usr/lib/libnvidia-rmapi-tegra.so.535.00
lib, /usr/lib/libnvrm_interop_gpu.so
lib, /usr/lib/libnvtegrahv.so
lib, /usr/lib/libnvivc.so
lib, /usr/lib/libnvscievent.so
lib, /usr/lib/libnvvideo.so
lib, /usr/lib/libnvvic.so
lib, /usr/lib/libnvmedia_eglstream.so
lib, /usr/lib/libnvfusacap.so
lib, /usr/lib/libnvsipl_control.so
lib, /usr/lib/libnvsipl_devblk_cdi.so
lib, /usr/lib/libnvsipl_devblk_ddi.so
lib, /usr/lib/libnvsipl_devblk_crypto.so
lib, /usr/lib/libnvdla_compiler.so
lib, /lib/aarch64-linux-gnu/libXau.so.6
lib, /lib/aarch64-linux-gnu/libXdmcp.so.6
lib, /usr/lib/libnvpvaumd.so
lib, /usr/lib/libnvrm_stream.so
lib, /usr/lib/libnvisppg.so
lib, /usr/lib/libnvpkcs11.so
lib, /lib/aarch64-linux-gnu/libbsd.so.0
lib, /usr/lib/libnvisp.so
lib, /usr/lib/libteec.so
lib, /usr/lib/libnvvse.so
sym, /usr/lib/libnvscisync.so
lib, /usr/lib/libnvscisync.so.1
lib, /usr/lib/libnvcudla.so
lib, /usr/lib/libnvidia-eglcore.so.535.00
lib, /usr/lib/libnvdc.so
lib, /usr/lib/libnvimp.so
lib, /usr/lib/libnvddk_2d_v2.so
lib, /usr/lib/libnvddk_vic.so

If your resulting drivers.csv file does not look like the preceding sample, replace the entire contents of the drivers.csv file with the sample content above.

You are now ready to run the sample_hello_world application in Docker. If you have not already done so, change to the directory containing the sample application and run the following command.
$ sudo docker run --rm --runtime nvidia --gpus all -v /usr/local/driveworks-5.12:/usr/local/driveworks -v /usr/local/cuda-11.4:/usr/local/cuda -v $(pwd):$(pwd) -w $(pwd) --security-opt systempaths=unconfined ubuntu:20.04 ./sample_hello_world

A few details of the preceding Docker command warrant explanation.

  • Running ldd lists a number of libraries under the /usr/local/cuda and /usr/local/driveworks paths. However, these paths are symlinks to the actual directories /usr/local/cuda-11.4 and /usr/local/driveworks-5.12, and inside the container they do not resolve. The CSV mounts the actual directories appropriately, but the sample binary complains that it cannot find the libraries because the symlinks that resolve those paths are not present in the image. To address this, explicitly mount the versioned CUDA and DriveWorks directories from the host to the paths that the symlinks would have provided.

    -v /usr/local/cuda-11.4:/usr/local/cuda 
    -v /usr/local/driveworks-5.12:/usr/local/driveworks

    This addresses a number of the dependencies within those two directories and allows you to remove them from the CSV.

  • The sample requires some access to the host /proc paths. You can identify this by running strace against the application and examining the files it tries to open (see the sketch after this list). However, these paths cannot be mounted through the CSV because Docker dynamically allocates a /proc namespace separate from the host.

    Normally, these paths can be exposed in Docker by specifying the --privileged argument, but that is overly permissive. Instead, it is recommended to use the --security-opt systempaths=unconfined argument, which enables access to /proc while being less permissive than --privileged. However, depending on your application's use case, you might require additional permissions to access resources on the host; in that case, use --privileged.
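
The commands below are a hedged sketch of how you might verify both notes above on the target; the exact output depends on your installation.

# Confirm that the host paths are symlinks to the versioned directories
# (the container image does not contain these symlinks).
$ readlink -f /usr/local/cuda          # expected: /usr/local/cuda-11.4
$ readlink -f /usr/local/driveworks    # expected: /usr/local/driveworks-5.12

# Trace which /proc files the sample tries to open; these accesses are why
# --security-opt systempaths=unconfined (or --privileged) is required.
$ strace -f -e trace=openat ./sample_hello_world 2>&1 | grep '/proc'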

You have now identified dependencies and enabled another application to run in Docker on the NVIDIA DRIVE OS Linux Guest VM. Continue to follow the template already present in these files, making the necessary changes to support your other applications.