Robotics

Introducing NVIDIA Jetson Thor, the Ultimate Platform for Physical AI

Robotics is undergoing a revolution, moving beyond the era of specialist machines toward generalist robotics: away from single-purpose, fixed-function robots and toward adaptable robots trained to perform diverse tasks across varied environments. Inspired by human cognition, these robots combine fast, reactive responses with high-level reasoning and planning, enabling more efficient learning and adaptation.

This paradigm opens the door for robots to operate flexibly across industries, reducing costs and expanding practical applications far beyond what specialist robots can achieve. At GTC 2025, the NVIDIA Isaac GR00T platform was introduced as the foundation for this transformation, bringing together robot foundation models, synthetic data pipelines, a simulation environment, and a runtime computer.

Today, we’re excited to announce that the NVIDIA Jetson AGX Thor Developer Kit and NVIDIA Jetson T5000 module are generally available, empowering developers everywhere to build the future of physical AI. With Jetson Thor, robots no longer need to be reprogrammed for each new job. Jetson Thor is the ultimate platform for physical AI, providing a supercomputer for generative reasoning and multimodal, multisensor processing. Jetson Thor can be integrated into next-generation robots to accelerate foundation models, allowing flexibility for challenges like object manipulation, navigation, and following complex instructions. 

What does it take to build a generalist humanoid robot? 

Building a typical humanoid robot requires four essential layers:

  • Hardware abstraction: Integrates all key sensing and actuation modalities, enabling the robot to perceive its environment and interact physically with the world.
  • Real-time control framework: Manages precise, low-latency control of the robot’s movement, where minimizing latency is absolutely critical for safe and responsive operation.
  • Perception and planning: Equips the robot with environmental understanding, grasp and motion planning, locomotion, object recognition, and localization—allowing effective interaction with the surrounding world. Unlike real-time control, this layer allows for slightly more processing time to ensure thoughtful, accurate decisions.
  • High-level reasoning: Powers advanced functions such as scene understanding, complex task planning, and natural language interaction, where longer processing times are acceptable to support deeper reasoning and adaptability.
A humanoid application on the left with the various layers used to build it on the right, including sections labeled high-level reasoning, perception and planning, real-time control framework, and hardware abstraction.
Figure 1. A range of hardware and software components are required to build a humanoid robot
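As a rough illustration of why the four layers above run at different rates, the following toy Python sketch separates a fast control loop from a slower planning loop on different threads, with the planner publishing results that the control loop consumes. The rates and shared state are illustrative only, not a real robotics control framework.

```python
# Toy sketch of the multi-rate structure implied by the layers above: a fast
# control loop runs on its own thread at a fixed period, while a slower
# perception/planning step runs less frequently and publishes results for the
# control loop to consume. All timings and names are illustrative.
import threading
import time

latest_plan = {"target": 0.0}   # shared state written by the planner
plan_lock = threading.Lock()

def control_loop(period_s: float = 0.002, steps: int = 500):
    """~500 Hz loop: must stay low-latency, only reads the latest plan."""
    for _ in range(steps):
        with plan_lock:
            target = latest_plan["target"]
        # ... send an actuator command toward `target` here ...
        time.sleep(period_s)

def planning_loop(period_s: float = 0.1, steps: int = 10):
    """~10 Hz loop: heavier perception/planning work, updates the plan."""
    for i in range(steps):
        new_target = float(i)   # placeholder for real planning output
        with plan_lock:
            latest_plan["target"] = new_target
        time.sleep(period_s)

if __name__ == "__main__":
    t1 = threading.Thread(target=control_loop)
    t2 = threading.Thread(target=planning_loop)
    t1.start(); t2.start()
    t1.join(); t2.join()
    print("final target:", latest_plan["target"])
```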

How is NVIDIA Jetson Thor the ultimate platform for physical AI and humanoid robotics? 

The NVIDIA Jetson AGX Thor Developer Kit gives you unmatched performance and scalability. It’s powered by the NVIDIA Blackwell GPU and 128 GB of memory, delivering up to 2070 FP4 teraflops (TFLOPS) of AI compute to effortlessly run the latest generative AI models—all within a 130 W power envelope. Compared to NVIDIA Jetson AGX Orin, it provides up to 7.5x higher AI compute and 3.5x better energy efficiency. 

Jetson Thor helps you accelerate low-latency, real-time applications with the new Blackwell Multi-Instance GPU (MIG) technology and a robust 14-core Arm Neoverse-V3AE CPU. It also includes a suite of accelerators, including a third-generation Programmable Vision Accelerator (PVA), dual encoders and decoders, an optical flow accelerator, and more. 

For high-speed sensor fusion, the developer kit offers extensive I/O options, including a QSFP slot with 4x25GbE, a wired Multi-GbE RJ45 connector, multiple USB ports, and additional connectivity interfaces. It’s also designed for seamless integration with existing humanoid robot platforms, allowing for easy tethering to jump-start prototyping. 

Transformer Engine and FP4 support

Built on the NVIDIA Blackwell architecture, Jetson Thor introduces native FP4 quantization with a next-generation Transformer Engine that dynamically switches between FP4 and FP8 for optimal performance. By combining 4-bit weights and activations with higher memory bandwidth, Jetson Thor accelerates both prefill and decoding in generative AI workloads.
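The sketch below illustrates the basic idea behind 4-bit block quantization using plain NumPy: weights are grouped into small blocks, each block stores one scale, and values are rounded into a 4-bit range. This is a conceptual toy, not the Transformer Engine's actual FP4 format or calibration logic.

```python
# Minimal NumPy sketch of block-wise 4-bit weight quantization. It only
# illustrates the idea behind FP4-style compression; the real Transformer
# Engine uses hardware FP4 formats and automatic scaling, not this toy
# signed-int4 scheme.
import numpy as np

def quantize_block_int4(w: np.ndarray, block: int = 32):
    """Quantize a 1-D weight vector to 4-bit integers, one scale per block."""
    pad = (-len(w)) % block
    blocks = np.pad(w, (0, pad)).reshape(-1, block)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 7.0  # int4 range [-8, 7]
    scales[scales == 0] = 1.0
    q = np.clip(np.round(blocks / scales), -8, 7).astype(np.int8)
    return q, scales, len(w)

def dequantize_block_int4(q, scales, n):
    """Reconstruct approximate float weights from 4-bit codes and scales."""
    return (q.astype(np.float32) * scales).reshape(-1)[:n]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal(4096).astype(np.float32)
    q, s, n = quantize_block_int4(w)
    w_hat = dequantize_block_int4(q, s, n)
    print("mean abs error:", np.abs(w - w_hat).mean())
    print("storage: 4-bit codes plus one scale per 32 weights")
```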

Multi-Instance GPU

Jetson Thor introduces MIG, enabling a single GPU to be partitioned into isolated instances with dedicated resources. This ensures predictable performance by reserving capacity for critical workloads while running less time-sensitive tasks in parallel—especially valuable for mixed-criticality robotics applications.
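As a rough illustration of how a mixed-criticality deployment can use MIG, the following sketch pins two processes to separate MIG instances by setting CUDA_VISIBLE_DEVICES before launch, which is how MIG instances are addressed on MIG-capable NVIDIA GPUs. The MIG UUIDs and worker scripts are placeholders; the exact way instances are enumerated and named on Jetson Thor may differ.

```python
# Hedged sketch: pinning processes to MIG instances via CUDA_VISIBLE_DEVICES.
# On MIG-capable NVIDIA GPUs, each instance appears as a separate device
# addressable by a "MIG-..." UUID. The UUIDs and scripts below are placeholders;
# list the real instance UUIDs on your system (for example, with `nvidia-smi -L`)
# before adapting this.
import os
import subprocess

# Hypothetical MIG instance UUIDs for a critical and a best-effort partition.
CRITICAL_MIG = "MIG-00000000-0000-0000-0000-000000000001"
BEST_EFFORT_MIG = "MIG-00000000-0000-0000-0000-000000000002"

def launch_on_instance(command: list[str], mig_uuid: str) -> subprocess.Popen:
    """Start a process that only sees one MIG instance as its GPU."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=mig_uuid)
    return subprocess.Popen(command, env=env)

if __name__ == "__main__":
    # Real-time perception gets the reserved instance; a less time-sensitive
    # workload runs in parallel on the remaining capacity without interfering.
    perception = launch_on_instance(["python3", "perception_pipeline.py"], CRITICAL_MIG)
    analytics = launch_on_instance(["python3", "video_summarizer.py"], BEST_EFFORT_MIG)
    perception.wait()
    analytics.wait()
```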

Tables 1 and 2 show the key features and interfaces of the Jetson T5000 module and the NVIDIA Jetson AGX Thor Developer Kit carrier board. 

| Specification | NVIDIA Jetson T5000 | NVIDIA Jetson T4000* |
| --- | --- | --- |
| AI performance | 2070 TFLOPS (sparse FP4); 1035 TFLOPS (dense FP4 / sparse FP8 / sparse INT8); 517 TFLOPS (dense FP8 / sparse FP16) | 1200 TFLOPS (sparse FP4); 600 TFLOPS (dense FP4 / sparse FP8 / sparse INT8); 300 TFLOPS (dense FP8 / sparse FP16) |
| GPU | 2560-core NVIDIA Blackwell architecture GPU with 96 fifth-generation Tensor Cores; MIG with 10 TPCs | 1536-core NVIDIA Blackwell architecture GPU with 64 fifth-generation Tensor Cores; MIG with 6 TPCs |
| CPU | 14-core Arm Neoverse-V3AE 64-bit CPU | 12-core Arm Neoverse-V3AE 64-bit CPU |
| Memory | 128 GB 256-bit LPDDR5X, 273 GB/s | 64 GB 256-bit LPDDR5X, 273 GB/s |
| Frequency | 1.57 GHz max GPU; 2.6 GHz max CPU | 1.57 GHz max GPU; 2.6 GHz max CPU |
| Storage | Supports NVMe through PCIe; supports SSD through USB 3.2 | Supports NVMe through PCIe; supports SSD through USB 3.2 |
| Vision accelerator | PVA v3.0 | PVA v3.0 |
| Video encode | Up to 6x 4Kp60 (H.265/H.264) | Up to 6x 4Kp60 (H.265/H.264)* |
| Video decode | Up to 4x 8Kp30 (H.265); up to 4x 4Kp60 (H.264) | Up to 4x 8Kp30 (H.265)*; up to 4x 4Kp60 (H.264)* |
| Camera | Up to 20 cameras through HSB; up to 6 cameras through 16x MIPI CSI-2 lanes; up to 32 cameras using virtual channels; C-PHY 2.1 (10.25 Gbps); D-PHY 2.1 (40 Gbps) | Up to 20 cameras through HSB; up to 6 cameras through 16x MIPI CSI-2 lanes; up to 32 cameras using virtual channels; C-PHY 2.1 (10.25 Gbps); D-PHY 2.1 (40 Gbps) |
| Display | 4x shared HDMI 2.1 / VESA DisplayPort 1.4a (HBR2, MST) | 4x shared HDMI 2.1 / VESA DisplayPort 1.4a (HBR2, MST) |
| Power | 40 W – 130 W | 40 W – 70 W |

Table 1. NVIDIA Jetson AGX Thor Developer Kit module specifications
* Preliminary and subject to change

| Specification | NVIDIA Jetson AGX Thor Developer Kit |
| --- | --- |
| Integrated module | NVIDIA Jetson T5000 module |
| Storage | Integrated 1 TB NVMe on M.2 Key M slot |
| Camera | HSB camera through QSFP slot; USB camera |
| PCIe | M.2 Key M slot with x4 PCIe Gen5 (populated with 1 TB NVMe); M.2 Key E slot with x1 PCIe Gen5 (populated with Wi-Fi 6E plus Bluetooth module) |
| USB | 2x USB Type-A 3.2 Gen2; 2x USB Type-C 3.1 Gen1; 1x USB Type-C (debug only) |
| Networking | 1x 5GbE RJ45 connector; 1x QSFP28 (4x 25GbE) |
| Wi-Fi | 802.11ax Wi-Fi 6E |
| Display | 1x HDMI 2.0b; 1x DisplayPort 1.4a |
| Other I/Os | 2x 13-pin CAN header; 2x 6-pin automation header; 2x 5-pin header; JTAG connector; 1x 4-pin fan connector (12 V, PWM, and tach); 2x 5-pin audio panel header; 2-pin RTC backup battery connector; Microfit power jack; Power, Force Recovery, and Reset buttons |
| Mechanical | 243.19 mm x 112.40 mm x 56.88 mm (height includes feet, carrier board, module, and thermal solution) |

Table 2. NVIDIA Jetson AGX Thor Developer Kit carrier board specifications
Block diagram showing the different components in Jetson Thor modules.
Figure 2. Components of NVIDIA Jetson Thor modules

How does Jetson Thor boost generative AI at the edge?

Jetson AGX Thor belongs to a new class of robotic computers, architected from the ground up to power next-generation humanoid robots. It supports a wide range of generative AI models, from vision language action (VLA) models like NVIDIA Isaac GR00T N1.5 to all popular large language models (LLMs) and vision language models (VLMs).

To deliver a seamless cloud-to-edge experience, Jetson Thor runs the NVIDIA AI software stack for physical AI applications, including NVIDIA Isaac for robotics, NVIDIA Metropolis for visual agentic AI, and NVIDIA Holoscan for sensor processing. You can also build AI agents at the edge using NVIDIA agentic AI workflows like video search and summarization (VSS).

Logos of all the frameworks and models supported on Jetson, including Hugging Face, PyTorch, Meta, Gemini, OpenAI, Qwen, and many more.
Figure 3. Jetson Thor supports a wide variety of AI frameworks and generative AI models

Why are generative reasoning and multimodal sensor processing important for physical AI?

Generative reasoning models are crucial for robotics platforms that can simulate possible sequences of actions, anticipate consequences, reason over language or visual cues, and flexibly generate high-level plans or low-level motion policies. This leads to robotic systems that are significantly more flexible, adaptable, and capable of robust, human-level reasoning in real-world settings.

NVIDIA Jetson Thor delivers a giant leap in generative reasoning by providing speedups of up to 5x compared to Jetson Orin. With FP4 and speculative decoding, developers will be able to achieve an additional 2x performance speedup on Jetson Thor.

Bar chart comparing Jetson Thor to Jetson Orin on generative reasoning benchmarks.
Figure 4. Jetson Thor is up to 5x faster in generative reasoning compared to Jetson Orin

Jetson Thor also seamlessly handles multiple generative AI models and a large number of multimodal sensor inputs, delivering real-time responsiveness. Figure 5 demonstrates this capability using the Qwen2.5-VL-3B VLM and Llama 3.2 3B LLM, handling 16 simultaneous requests. Both models achieve a Time to First Token (TTFT) well below 200 milliseconds and a Time per Output Token (TPOT) well below 50 milliseconds—two key indicators of system responsiveness.
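TTFT and TPOT can be measured against any serving stack that streams tokens. The sketch below shows one generic way to compute them from a token stream; `stream_tokens` is a stand-in for whatever streaming interface your inference server exposes, not a specific NVIDIA API.

```python
# Generic sketch of how TTFT and TPOT are typically measured from a streaming
# generation loop. `stream_tokens` is a stand-in for any streaming API that
# yields one token at a time.
import time
from typing import Iterable

def measure_latency(stream_tokens: Iterable[str]):
    start = time.perf_counter()
    first_token_time = None
    count = 0
    for _token in stream_tokens:
        now = time.perf_counter()
        if first_token_time is None:
            first_token_time = now
        count += 1
    end = time.perf_counter()
    if first_token_time is None:
        raise ValueError("stream produced no tokens")
    ttft = first_token_time - start                      # time to first token
    tpot = (end - first_token_time) / max(count - 1, 1)  # time per output token
    return ttft, tpot

if __name__ == "__main__":
    def fake_stream():  # stand-in generator, for demonstration only
        for i in range(16):
            time.sleep(0.01)
            yield f"tok{i}"

    ttft, tpot = measure_latency(fake_stream())
    print(f"TTFT: {ttft * 1000:.1f} ms, TPOT: {tpot * 1000:.1f} ms")
```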

Chart comparing performance of TPOT and TTFT on Orin and Thor.
Figure 5. Jetson Thor delivers real-time responsiveness when handling multiple generative AI models and a large number of multimodal sensor inputs

Jetson Thor leverages Blackwell native FP4 support and also enables advanced techniques like speculative decoding. In this approach, a smaller draft model proposes tokens, and a larger model validates them. This method accelerates generative AI inference while maintaining accuracy, ultimately delivering faster and higher-quality output.
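The control flow can be summarized in a short, illustrative sketch: a cheap draft step proposes a handful of tokens, the target model verifies them, and the agreeing prefix is kept. Real systems such as EAGLE batch the verification into a single forward pass and use probabilistic acceptance; the toy below uses greedy agreement purely to make the idea concrete.

```python
# Toy sketch of the speculative-decoding control flow described above: a small
# draft model proposes k tokens and the larger target model verifies them.
# Greedy agreement and sequential verification are used here only for clarity;
# real implementations verify all k proposals in one batched forward pass.
from typing import Callable, List

def speculative_decode(
    draft_next: Callable[[List[int]], int],    # cheap model: next-token guess
    target_next: Callable[[List[int]], int],   # large model: reference choice
    prompt: List[int],
    max_new_tokens: int = 32,
    k: int = 4,
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # Draft model proposes k tokens cheaply.
        proposal, ctx = [], list(tokens)
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # Target model verifies the proposals; keep the agreeing prefix, then
        # emit the target's own token at the first disagreement.
        accepted = 0
        for t in proposal:
            expected = target_next(tokens)
            if t == expected:
                tokens.append(t)
                accepted += 1
            else:
                tokens.append(expected)
                break
        if accepted == k:
            tokens.append(target_next(tokens))  # bonus token after full acceptance
    return tokens[len(prompt):]
```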

Figure 4 demonstrates that the Qwen2.5-VL-7B model achieves up to 3.5x faster inference on Thor, with FP4 quantization and Eagle-based speculative decoding, compared to Jetson Orin operating with W4A16 (4-bit weights, 16-bit activations).

In addition, Jetson Thor delivers the next level of acceleration and provides up to 5x improvement over Jetson Orin on generative AI models, including LLMs, VLMs, and VLAs, as shown in Table 3.

| Category | Family | Model | Jetson AGX Thor (output tokens/sec) | Jetson AGX Orin (output tokens/sec) | Speedup |
| --- | --- | --- | --- | --- | --- |
| LLM | Llama | Llama 3.1 8B | 150.81 | 112.33 | 1.34 |
| LLM | Llama | Llama 3.3 70B | 12.64 | 7.38 | 1.71 |
| LLM | Qwen | Qwen3-30B-A3B | 226.42 | 76.69 | 2.95 |
| LLM | Qwen | Qwen3-32B | 79.11 | 16.84 | 4.70 |
| LLM | DeepSeek | DeepSeek-R1-Distill-Qwen-7B | 304.76 | 180.41 | 1.69 |
| LLM | DeepSeek | DeepSeek-R1-Distill-Qwen-32B | 82.63 | 16.96 | 4.87 |
| VLM | Qwen | Qwen2.5-VL-3B | 356.86 | 216 | 1.65 |
| VLM | Qwen | Qwen2.5-VL-7B | 252 | 154.02 | 1.64 |
| VLM | Llama | Llama 3.2 11B Vision | 69.63 | 44.22 | 1.57 |
| VLA | GR00T | GR00T N1 | 46.7 | 18.5 | 2.52 |
| VLA | GR00T | GR00T N1.5 | 41.5 | 15.2 | 2.74 |

Table 3. Benchmarks on Jetson AGX Thor and Jetson AGX Orin

Configuration for these benchmarks: sequence length 2048; output sequence length 128; max concurrency 8; LLM and VLM models were run with vLLM and VLA models with TensorRT; power mode MAXN for both Jetson AGX Thor and Jetson AGX Orin.
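For reference, a throughput number in output tokens/sec can be approximated with vLLM's offline API as in the hedged sketch below. The model ID, prompt set, and sampling settings are illustrative placeholders and do not reproduce NVIDIA's exact benchmark harness or power configuration.

```python
# Hedged sketch of how an output-tokens/sec number like those in Table 3 can
# be approximated with vLLM's offline API. Model name, prompts, and sampling
# settings are placeholders, not NVIDIA's benchmark harness.
import time
from vllm import LLM, SamplingParams

MODEL = "meta-llama/Llama-3.1-8B-Instruct"                    # placeholder model ID
PROMPTS = ["Describe the scene in front of the robot."] * 8   # 8 concurrent requests

def main():
    llm = LLM(model=MODEL)
    params = SamplingParams(max_tokens=128, temperature=0.0)

    start = time.perf_counter()
    outputs = llm.generate(PROMPTS, params)
    elapsed = time.perf_counter() - start

    # Count generated tokens across all requests to get aggregate throughput.
    generated = sum(len(o.outputs[0].token_ids) for o in outputs)
    print(f"{generated} tokens in {elapsed:.2f} s -> "
          f"{generated / elapsed:.1f} output tokens/sec")

if __name__ == "__main__":
    main()
```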

How does Jetson software accelerate AI at the edge? 

Jetson software accelerates the performance of AI at the edge by delivering a tightly integrated, full-stack software platform optimized for real-time, high-throughput applications across robotics, healthcare, logistics, and autonomous systems. 

Powered by JetPack 7, with Linux kernel 6.8, Ubuntu 24.04 LTS, and the latest NVIDIA AI stack, Jetson software enables low-latency, deterministic execution of advanced generative AI models for physical AI. It combines hardware-accelerated compute and system-level optimizations to drive responsive, intelligent behavior in complex systems such as humanoid robots, autonomous machines, and industrial automation.

With features such as the integrated Holoscan Sensor Bridge, MIG support, and a preemptible real-time kernel, Jetson software boosts the performance and efficiency of tasks like high-speed sensor fusion and motion planning. Supported by the Jetson AI Lab and a broad ecosystem, Jetson software significantly accelerates time-to-performance for edge AI and robotics applications.

The Jetson Thor platform supports the new Cosmos Reason, an open, customizable, 7B reasoning VLM for physical AI and robotics. 

Diagram showing the different components of the Jetson software stack.
Figure 7. Jetson software stack 

JetPack 7 is built on the SBSA standard

With JetPack 7, Jetson software aligns with the Server Base System Architecture (SBSA), positioning Jetson Thor alongside industry-standard Arm server designs. SBSA standardizes critical hardware and firmware interfaces, delivering stronger OS support, simpler software portability, and smoother enterprise integration. Building on this foundation, Jetson Thor now supports a unified CUDA 13.0 installation across all Arm targets, streamlining development, reducing fragmentation, and ensuring consistency from server-class systems to Jetson Thor.

How does NVIDIA Isaac accelerate robot development end-to-end? 

NVIDIA Isaac is an open robotics platform with CUDA-accelerated libraries, frameworks, and AI models for building autonomous mobile robots (AMRs), manipulators, humanoids, and more. Modern robots require an advanced "brain" of control, vision, and language models that process multimodal data in real time for seamless perception-to-action.

Jetson Thor is purpose-built to run demanding models like Isaac GR00T N1.5, delivering real-time human interaction, spatial awareness, and robust perception. Together, Isaac and Thor enable scalable, edge-deployed multimodal AI, accelerating robotics innovation in industry and research.

Diagram showing different components of Isaac GR00T.
Figure 8. NVIDIA Isaac GR00T accelerates robot development end-to-end

How can you use VSS to unlock knowledge from edge cameras? 

The NVIDIA Blueprint for Video Search and Summarization (VSS) from NVIDIA Metropolis gives you the tools to build and deploy video analytics AI agents that can perform contextualized real-time alerts, summarization, and Q&A by analyzing live camera streams. 

VSS is powering visual agent workforces across industries: visual inspection and worker safety in manufacturing, fan engagement and player analytics in live sports, and faster emergency response to roadway incidents.

Diagram showing the components of the VSS development platform for building video analytics AI agents.
Figure 9. VSS is used for visual inspection and worker safety in manufacturing and to improve emergency response times in roadway incidents

How is Holoscan on Thor used for real-time sensor processing?

NVIDIA Holoscan is an AI sensor processing platform that delivers the accelerated, full-stack infrastructure required for software-defined, real-time AI. It is built to simplify and scale edge AI on enterprise-grade hardware, providing a high-performance solution at the edge.

With Holoscan on Jetson Thor, you can securely partition and isolate concurrent AI workflows, ensuring deterministic performance, fault tolerance, and protection against data leakage for mission-critical applications. This enables continuous AI innovation without compromising safety, making Holoscan the trusted operating layer for real-time action in regulated fields.

Modern robots rely on a diverse array of sensors—including cameras, IMUs, actuators, and much more—each essential for intelligent operation. With the NVIDIA Holoscan Sensor Bridge, you can seamlessly connect all sensors over Ethernet to NVIDIA Jetson platforms, regardless of modality. Using the new Camera over Ethernet technology available in Jetson Thor, sensor data streams directly into GPU memory, dramatically reducing latency and minimizing CPU overhead. This approach replaces traditional driver complexity with streamlined, software-defined APIs, empowering real-time edge AI applications to achieve precise synchronization and robust scalability—whether for robotics, industrial automation, or advanced medical systems. 
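For a sense of the programming model, here is a minimal Holoscan application sketch in Python that wires a toy source operator to a sink. It assumes the Holoscan SDK Python package is installed on the device; a real pipeline would swap the stub source for a sensor-bridge or camera operator and the print sink for inference and visualization operators.

```python
# Minimal Holoscan application sketch (Python). The operators here are stubs
# that only demonstrate the dataflow style; they are not NVIDIA-provided
# sensor or inference operators.
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec

class SensorStubOp(Operator):
    """Stand-in for a sensor source; emits an incrementing frame counter."""
    def __init__(self, fragment, *args, **kwargs):
        self.frame = 0
        super().__init__(fragment, *args, **kwargs)

    def setup(self, spec: OperatorSpec):
        spec.output("out")

    def compute(self, op_input, op_output, context):
        self.frame += 1
        op_output.emit(self.frame, "out")

class PrintSinkOp(Operator):
    """Consumes frames and prints them; stands in for inference/visualization."""
    def setup(self, spec: OperatorSpec):
        spec.input("in")

    def compute(self, op_input, op_output, context):
        frame = op_input.receive("in")
        print(f"processed frame {frame}")

class MinimalPipeline(Application):
    def compose(self):
        # Emit ten frames, then stop; connect the source output to the sink input.
        src = SensorStubOp(self, CountCondition(self, 10), name="source")
        sink = PrintSinkOp(self, name="sink")
        self.add_flow(src, sink)

if __name__ == "__main__":
    MinimalPipeline().run()
```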

A comprehensive ecosystem for accelerating time to market

To help developers bring solutions to market faster, the Jetson ecosystem—with a network of more than 1,000 partners—provides targeted support through a variety of partner categories, ensuring the right capabilities are available at each stage of the process. 

| Partner category | Offerings |
| --- | --- |
| Independent Software Vendors (ISVs) | Application software |
| Cloud Service Providers (CSPs) and software tools | Platform OS, device management, AI model customization and porting |
| Platform software and AI services | Platform OS, system software |
| HW partners (OEM/ODM/system builders, design houses, and more) | Carrier boards, COTS, customized systems, turnkey design services, connectivity modules |
| Sensors | Cameras (MIPI/GMSL, Ethernet, USB), lidar, IMU, audio, ISP tuning |
| Distributors | Jetson modules and developer kits |

Table 4. The Jetson ecosystem includes more than 1,000 partners
Image showing the logos of the partners in the NVIDIA robotics ecosystem.
Figure 10. NVIDIA partners offer flexible engagement models, allowing you to choose the specific components and services required for your design

Get started with NVIDIA Jetson Thor for physical AI

Accelerate your software development by connecting the Jetson AGX Thor Developer Kit directly to an existing robot. You can begin creating and testing applications immediately—no need to wait for complete system integration.

Video 1. Learn how to get started with the NVIDIA Jetson AGX Thor Developer Kit 

Join over 2 million developers and kick-start your next-generation physical AI projects. The Jetson AGX Thor Developer Kit, priced at $3,499, and the Jetson T5000 production modules are available now for purchase through NVIDIA authorized distributors worldwide. 

Get started with the Jetson AGX Thor Developer Kit and download the latest JetPack 7.

Comprehensive documentation, support resources, and tools are available through the Jetson Download Center and ecosystem partners. Have questions or need guidance? Connect with experts and other developers in the NVIDIA Developer Forum.
