JetPack SDK 4.6 Release Page
NVIDIA JetPack SDK is the most comprehensive solution for building end-to-end accelerated AI applications. All Jetson modules and developer kits are supported by JetPack SDK.
JetPack SDK includes the Jetson Linux Driver Package (L4T) with Linux operating system and CUDA-X accelerated libraries and APIs for Deep Learning, Computer Vision, Accelerated Computing and Multimedia. It also includes samples, documentation, and developer tools for both host computer and developer kit, and supports higher level SDKs such as DeepStream for streaming video analytics and Isaac for robotics.
JetPack 4.6
JetPack 4.6 is the latest production release and supports all Jetson modules, including the Jetson AGX Xavier Industrial module. JetPack 4.6 includes support for Triton Inference Server; new versions of CUDA, cuDNN, and TensorRT; VPI 1.1 with support for new computer vision algorithms and Python bindings; and L4T 32.6.1 with Over-The-Air update features, security features, and a new flashing tool to flash internal or external media connected to Jetson.
See the highlights below for the full list of features added in JetPack 4.6.
In addition to the L4T-base container, CUDA runtime and TensorRT runtime containers are now released on NGC for JetPack 4.6.
Installing JetPack
SD Card Image Method
For Jetson Nano Developer Kit:
Follow the steps at Getting Started with Jetson Nano Developer Kit.
For Jetson Nano 2GB Developer Kit:
Follow the steps at Getting Started with Jetson Nano 2GB Developer Kit.
NVIDIA SDK Manager Method
Follow the steps at Install Jetson Software with SDK Manager.
NVIDIA SDK Manager can be installed on Ubuntu 18.04 or Ubuntu 16.04 to flash Jetson with JetPack 4.6.
JetPack can also be installed or upgraded using a Debian package management tool on Jetson. We also host Debian packages for JetPack components for installing on host. Refer to the JetPack documentation for instructions.
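As a minimal sketch of verifying a Debian-based install (not taken from the JetPack documentation), the snippet below queries dpkg for the JetPack meta-package and prints its version; the package name nvidia-jetpack is an assumption to verify against the JetPack documentation for your release.

```python
# Hedged sketch: report the installed JetPack meta-package version on a Jetson.
# The package name "nvidia-jetpack" is an assumption; verify it against the
# JetPack documentation for your release.
import subprocess

def jetpack_version(package="nvidia-jetpack"):
    """Return the installed version of the JetPack meta-package, or None."""
    try:
        version = subprocess.check_output(
            ["dpkg-query", "-W", "--showformat=${Version}", package],
            universal_newlines=True,
        ).strip()
        return version or None
    except (OSError, subprocess.CalledProcessError):
        return None  # dpkg-query missing or package not installed

if __name__ == "__main__":
    print("nvidia-jetpack:", jetpack_version() or "not installed")
```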
More Resources
Key Features in JetPack
OS
NVIDIA L4T provides the bootloader, Linux kernel 4.9, necessary firmware, NVIDIA drivers, a sample filesystem based on Ubuntu 18.04, and more. JetPack 4.6 includes L4T 32.6.1, whose highlights include the Over-The-Air update features, security features, and the new initrd-based flashing tool for internal or external media noted above.
Notes:
¹ Flashing from NFS is deprecated and replaced by the new flashing tool, which uses initrd.
² The flashing performance test was done on a Jetson Xavier NX production module.
³ The secure boot enhancement to encrypt kernel, kernel-dtb, and initrd was already supported on Jetson Xavier NX and the Jetson AGX Xavier series in JetPack 4.5.
⁴ Support for encrypting internal media such as eMMC was added in JetPack 4.5.
TensorRT
TensorRT is a high-performance deep learning inference runtime for image classification, segmentation, and object detection neural networks. TensorRT is built on CUDA, NVIDIA’s parallel programming model, and enables you to optimize inference for all deep learning frameworks. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. JetPack 4.6 includes TensorRT 8.0.1.
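As a hedged illustration of this workflow (a minimal sketch, not one of the bundled JetPack samples), the snippet below uses the TensorRT 8.0 Python API to parse an ONNX model and build a serialized engine; the file names model.onnx and model.engine are placeholders.

```python
# Hedged sketch: build a TensorRT engine from an ONNX model with the
# TensorRT 8.0 Python API. File names are placeholders, not JetPack samples.
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="model.onnx", engine_path="model.engine"):
    builder = trt.Builder(LOGGER)
    # Explicit-batch network, as required for ONNX models.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse the ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28          # 256 MiB of build workspace
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)    # use FP16 where beneficial

    # build_serialized_network returns a serialized engine (TensorRT 8.x).
    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)

build_engine()
```

The resulting engine file can later be deserialized with trt.Runtime for inference.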
cuDNN
The CUDA Deep Neural Network library (cuDNN) provides high-performance primitives for deep learning frameworks. It provides highly tuned implementations of standard routines such as forward and backward convolution, pooling, normalization, and activation layers. JetPack 4.6 includes cuDNN 8.2.1.
CUDA
CUDA Toolkit provides a comprehensive development environment for C and C++ developers building GPU-accelerated applications. The toolkit includes a compiler for NVIDIA GPUs, math libraries, and tools for debugging and optimizing the performance of your applications. JetPack 4.6 includes CUDA 10.2.
Multimedia API
The Jetson Multimedia API package provides low-level APIs for flexible application development.
Camera application API: libargus offers a low-level frame-synchronous API for camera applications, with per-frame camera parameter control, multiple (including synchronized) camera support, and EGL stream outputs. RAW output CSI cameras needing ISP can be used with either libargus or the GStreamer plugin; in either case, the V4L2 media-controller sensor driver API is used.
Sensor driver API: the V4L2 API enables video decode, encode, format conversion, and scaling functionality. V4L2 for encode opens up many features such as bit rate control, quality presets, low-latency encode, temporal tradeoff, motion vector maps, and more.
JetPack 4.6 also brings additional multimedia highlights on top of these APIs.
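As a hedged illustration of the camera path described above (a sketch, not an official sample), the snippet below opens a CSI camera through the libargus-backed nvarguscamerasrc GStreamer element via OpenCV's GStreamer backend; the resolution, frame rate, and output file name are assumed placeholder values.

```python
# Hedged sketch: capture a frame from a CSI camera via the libargus-backed
# nvarguscamerasrc GStreamer element, using the OpenCV GStreamer backend.
# Resolution/frame rate are placeholders; adjust for your sensor mode.
import cv2

pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("failed to open camera pipeline (is GStreamer support enabled?)")

ok, frame = cap.read()          # grab a single BGR frame
if ok:
    cv2.imwrite("frame.jpg", frame)
cap.release()
```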
Computer Vision
VPI (Vision Programming Interface) is a software library that provides computer vision / image processing algorithms implemented on PVA¹ (Programmable Vision Accelerator), GPU, and CPU. OpenCV is a leading open source library for computer vision, image processing, and machine learning. VisionWorks² is a software development package for Computer Vision (CV) and image processing.
JetPack 4.6 includes VPI 1.1, OpenCV 4.1.1, and VisionWorks 1.6.
Notes:
¹ PVA is available only on the Jetson AGX Xavier series and Jetson Xavier NX.
² VisionWorks will no longer be included after JetPack 4.6.x; developers should instead use the VPI library for computer vision and image processing. Refer to the announcement here.
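As a small, hedged example using the bundled OpenCV (a sketch, not an NVIDIA sample), the snippet below applies a Gaussian blur and Canny edge detection to an image file; input.jpg and edges.png are placeholder paths.

```python
# Hedged sketch: basic image processing with the bundled OpenCV (cv2).
# input.jpg and edges.png are placeholder paths.
import cv2

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("input.jpg not found")

blurred = cv2.GaussianBlur(img, (5, 5), 1.5)   # smooth before edge detection
edges = cv2.Canny(blurred, 50, 150)            # low/high hysteresis thresholds
cv2.imwrite("edges.png", edges)
```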
Developer Tools
CUDA Toolkit provides a comprehensive development environment for C and C++ developers building high-performance GPU-accelerated applications with CUDA libraries. The toolkit includes Nsight Eclipse Edition, debugging and profiling tools including Nsight Compute, and a toolchain for cross-compiling applications.
NVIDIA Nsight Systems is a low overhead system-wide profiling tool, providing the insights developers need to analyze and optimize software performance.
NVIDIA Nsight Graphics is a standalone application for debugging and profiling graphics applications.
JetPack 4.6 includes NVIDIA Nsight Systems 2021.2 and NVIDIA Nsight Graphics 2021.2. Refer to the release notes for more details.
Supported SDKs and Tools
DeepStream SDK is a complete analytics toolkit for AI-based multi-sensor processing and video and audio understanding. The upcoming NVIDIA DeepStream SDK 6.0 will support JetPack 4.6.
NVIDIA Triton™ Inference Server simplifies deployment of AI models at scale. Triton Inference Server is open source and supports deployment of trained AI models from NVIDIA TensorRT, TensorFlow, and ONNX Runtime on Jetson. On Jetson, Triton Inference Server is provided as a shared library for direct integration with the C API. NVIDIA Triton Inference Server Release 21.07 supports JetPack 4.6.
PowerEstimator is a web app that simplifies creation of custom power mode profiles and estimates Jetson module power consumption. PowerEstimator v1.1 supports JetPack 4.6.
Cloud Native
Jetson brings cloud-native technologies like containers and container orchestration to the edge. NVIDIA JetPack includes NVIDIA Container Runtime with Docker integration, enabling GPU-accelerated containerized applications on the Jetson platform. NVIDIA hosts several container images for Jetson on NVIDIA NGC. Some are suitable for software development, with samples and documentation, while others are suitable for production software deployment and contain only runtime components. Find more information and a list of all container images at the Cloud-Native on Jetson page.
JetPack 4.6 highlights include new CUDA runtime and TensorRT runtime container images, which include the CUDA and TensorRT runtime components inside the container itself, as opposed to mounting those components from the host. These containers are built to containerize AI applications for deployment. Note that the L4T-base container continues to support existing containerized applications that expect it to mount CUDA and TensorRT components from the host.
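As a hedged sketch of running one of these containers programmatically (using the docker Python SDK, which is not part of JetPack), the snippet below starts an l4t-base container with the NVIDIA runtime; the image tag is an assumption and should be checked against NGC for the tag matching your L4T release.

```python
# Hedged sketch: start a GPU-enabled container with the docker Python SDK
# (pip install docker; not part of JetPack). The l4t-base tag below is an
# assumption -- check NGC for the tag matching your L4T release.
import docker

client = docker.from_env()
logs = client.containers.run(
    "nvcr.io/nvidia/l4t-base:r32.6.1",    # assumed NGC tag for L4T 32.6.1
    command="ls /usr/local/cuda",          # l4t-base mounts CUDA from the host
    runtime="nvidia",                      # NVIDIA Container Runtime
    remove=True,
)
print(logs.decode())
```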
For a full list of samples and documentation, see the JetPack documentation.
For older versions of JetPack, please visit the JetPack Archive.