DriveWorks SDK Reference
5.18.10 Release
For Test and Development only

TensorRT Optimizer Tool

Description

The NVIDIA® DriveWorks TensorRT Optimizer Tool optimizes a given model using TensorRT.

For specific examples, refer to the Examples section below.

Prerequisites

This tool is available on x86, NVIDIA DRIVE OS Linux, and QNX.

By default, this tool places its output files in the current working directory. Before running it, ensure the following:

  • Write permissions are enabled for the current working directory.
  • Include the tools folder in the system's binary search path (an example is shown after this list).
  • Execute from your home directory.
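
For example, assuming the tools are installed under /usr/local/driveworks/tools/dnn (the exact location may differ between releases and installations), the tools folder can be added to the search path with:

export PATH=$PATH:/usr/local/driveworks/tools/dnn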

Running the Tool

The TensorRT Optimization Tool accepts the following parameters. Several of these parameters are required, depending on the model type.
For more information, refer to the Examples section below.

Run the tool by executing:

./tensorRT_optimization --modelType=[onnx]
                        --onnxFile=[path to file]
                        [--iterations=[int]]
                        [--half2=[int]]
                        [--out=[path to file]]
                        [--int8]
                        [--calib=[calibration file name]]
                        [--cudaDevice=[CUDA GPU index]]
                        [--verbose=[int]]
                        [--useDLA]
                        [--pluginConfig=[path to plugin config file]]
                        [--precisionConfig=[path to precision config file]]
                        [--testFile=[path to binary file]]
                        [--useGraph=[int]]
                        [--workspaceSize=[int]]

Parameters

--modelType=[onnx]
        Description: The type of the model to be converted to a TensorRT network. Only ONNX is supported.

--onnxFile=[path to file]
        Description: Path to an ONNX file.
        Example: --onnxFile=~/myNetwork.onnx

--iterations=[int]
        Description: Number of iterations to run to measure speed.
                     This parameter is optional.
        Example: --iterations=100
        Default value: 10

--half2=[int]
        Description: Runs the network in paired (half2) FP16 mode. Requires a platform with native FP16 support.
                     This parameter is optional.
        Example: --half2=1
        Default value: 0

--out=[path to file]
        Description: Name of the optimized model file.
                     This parameter is optional.
        Example: --out=model.bin
        Default value: optimized.bin

--int8
        Description: If specified, runs in INT8 mode.
                     This parameter is optional.

--calib=[calibration file name]
        Description: INT8 calibration file name.
                     This parameter is optional.
        Example: --calib=calib.cache

--cudaDevice=[CUDA GPU index]
        Description: Index of a CUDA capable GPU device.
                     This parameter is optional.
        Example: --cudaDevice=1
        Default value: 0

--verbose=[int]
        Description: Enables TensorRT verbose logging.
                     This parameter is optional.
        Default value: 0

--useDLA
        Description: If specified, this generates a model to be executed on DLA. This argument is only valid on platforms with DLA hardware.
                     This parameter is optional.

--pluginConfig=[path to plugin config file]
        Description: Path to plugin configuration file. See template_plugin.json for an example.
                     This parameter is optional.
        Example: --pluginConfig=template_plugin.json

--precisionConfig=[path to precision config file]
        Description: Path to a precision configuration file for generating models with mixed
                     precision. For layers not included in the configuration file, the builder mode
                     determines the precision; for these layers, TensorRT may choose any precision
                     for better performance. If 'output_types' is not provided for a layer, the data
                     type of its output tensors is set to the precision of the layer. For layers
                     whose precision is set to INT8, scaling factors for the input/output tensors
                     should be provided. This file can also be used to set the scaling factor of
                     each tensor by name; the values provided in this file override the scaling
                     factors specified in the calibration file (if one is provided). See
                     'template_precision.json' for an example.
                     This parameter is optional.
        Example: --precisionConfig=template_precision.json

--testFile=[path to binary file]
        Description: Name of a binary file for model input/output validation. This file should contain
                     flattened pairs of inputs and expected outputs, in the same order as the
                     TensorRT model expects them. The file is assumed to hold 32-bit floats. The
                     number of test pairs is detected automatically. A sketch showing how such a
                     file can be generated appears after this parameter list.
                     This parameter is optional.
        Example: For a model with two inputs and two outputs, the file layout is:
                 [input 1][input 2][output 1][output 2][input 1][input 2][output 1][output 2]...

--useGraph
        Description: If specified, executes the optimized network using a CUDA graph. This helps check
                     whether the optimized network works with CUDA graph acceleration.
                     This parameter is optional.

--workspaceSize=[int]
        Description: Maximum workspace size in megabytes. This limits the amount of scratch memory
                     that any single layer in the network can use. If insufficient workspace is
                     provided, TensorRT may not be able to find an implementation for a given layer.
                     This parameter is optional.
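
For illustration, the following Python sketch writes a binary file in the layout that --testFile expects, for a hypothetical model with two inputs and two outputs. The tensor shapes, the number of test pairs, and the file name are placeholders rather than values required by the tool; in practice, write your real inputs and the expected outputs produced by your reference framework instead of random data.

import numpy as np

# Hypothetical tensor shapes: replace them with the shapes of your own network.
input_shapes  = [(3, 224, 224), (10,)]
output_shapes = [(1000,), (4,)]
num_pairs = 5  # number of input/output test pairs in the file

with open("myNetwork_testdata.bin", "wb") as f:
    for _ in range(num_pairs):
        # For each test pair: all inputs first, then all expected outputs,
        # each flattened and stored as 32-bit floats.
        for shape in input_shapes + output_shapes:
            tensor = np.random.rand(*shape).astype(np.float32)
            f.write(tensor.tobytes())

The resulting file can then be passed to the tool with --testFile=myNetwork_testdata.bin.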

Examples

Optimizing ONNX Models

./tensorRT_optimization --modelType=onnx
                        --onnxFile=~/myNetwork.onnx
Note
The --inputBlobs, --inputDims, and --outBlobs parameters are ignored if you select the ONNX model type.
All the input and output blobs will be automatically marked as input or output, respectively.
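
For illustration, the optional parameters described above can be combined in a single invocation. The commands below use placeholder file names and only options documented in the Parameters section.

To generate an FP16 (half2) optimized model:

./tensorRT_optimization --modelType=onnx
                        --onnxFile=~/myNetwork.onnx
                        --half2=1

To generate an INT8 optimized model with a calibration cache, a custom output file name, and test-file validation:

./tensorRT_optimization --modelType=onnx
                        --onnxFile=~/myNetwork.onnx
                        --int8
                        --calib=calib.cache
                        --out=myNetwork.optimized.bin
                        --testFile=myNetwork_testdata.bin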