Physics plays a crucial role in robotic simulation, providing the foundation for accurate virtual representations of robot behavior and interactions within realistic environments. With these simulators, researchers and engineers can train, develop, test, and validate robotic control algorithms and prototype designs in a safe, accelerated, and cost-effective manner.
However, simulation often fails to match reality, a problem known as the sim-to-real gap. Robotics developers need a unified, scalable, and customizable solution to model real-world physics, including support for different types of solvers.
This post walks you through how to train a quadruped robot to move from one point to another and how to set up a multiphysics simulation with an industrial manipulator to fold clothes. This tutorial uses Newton within NVIDIA Isaac Lab.
What is Newton?
Newton is an open source, extensible physics engine being developed by NVIDIA, Google DeepMind, and Disney Research, and managed by the Linux Foundation, to advance robot learning and development.
Built on NVIDIA Warp and OpenUSD, Newton enables robots to learn how to handle complex tasks with greater precision, speed, and extensibility. Newton is compatible with robot learning frameworks such as MuJoCo Playground and Isaac Lab. The Newton Solver API provides an interface for different physics engines, including MuJoCo Warp, to operate on the tensor-based data model, allowing easy integration with training environments in Isaac Lab.

At the core of Newton are the solver modules for numerical integration and constraint solving. Solvers may be constraint- or force-based, use direct or iterative methods, and may use maximal or reduced coordinate representations.
A common interface and shared data model mean that whether you run MuJoCo Warp, the Disney Research Kamino solver, or a custom solver, you interact with Newton consistently. This modular approach also lets you reuse collision handling, inverse kinematics, state management, and time-stepping logic without rewriting application code.
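To make this concrete, here is a minimal sketch of that pattern, mirroring the snippets later in this post. Treat the exact names (ModelBuilder, newton.solvers.SolverMuJoCo) and the step() arguments as assumptions to verify against the Newton documentation for your version.

# A minimal sketch of the shared solver interface (names are assumptions)
import newton

builder = newton.ModelBuilder()
# ... add bodies, joints, and shapes here ...
model = builder.finalize()

state_0, state_1 = model.state(), model.state()

# Picking a physics backend is a one-line choice; every solver consumes
# the same shared data model
solver = newton.solvers.SolverMuJoCo(model)  # or SolverFeatherstone(model)

for _ in range(100):
    solver.step(state_0, state_1, ...)    # same call regardless of backend
    state_0, state_1 = state_1, state_0   # swap state buffers for the next step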
For training, Newton provides a tensor-based API that exposes physics states as PyTorch- and NumPy-compatible arrays, enabling efficient batching and seamless integration with robot learning frameworks such as Isaac Lab. Through the Newton Selection API, training scripts can query joint states, apply actions, and feed results back into learning algorithms—all through a single, consistent interface.
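The Selection API sits on top of Warp’s framework interop. As a small illustration of that underlying layer, the sketch below continues the example above; wp.to_torch is a real Warp function, while the joint_q and joint_qd attribute names follow the data model used in this post’s snippets and should be treated as assumptions.

import torch
import warp as wp

# State arrays live on the GPU as Warp arrays; wp.to_torch wraps them as
# zero-copy PyTorch tensors, so a training loop can read joint states
# (and write actions) without a host round-trip
joint_pos = wp.to_torch(state_0.joint_q)
joint_vel = wp.to_torch(state_0.joint_qd)
obs = torch.cat([joint_pos, joint_vel], dim=-1)  # feed into the policy network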
MuJoCo Warp, developed by Google DeepMind, is fully integrated as a Newton solver and also powers MJX and Playground in the DeepMind stack. This enables models and benchmarks to move seamlessly across Newton, Isaac Lab, and MuJoCo environments with minimal friction.
Finally, Newton and its associated solvers are released under the Apache 2.0 license, ensuring the community can adopt, extend, and contribute.
What are the highlights of the Newton Beta release?
Highlights of the Newton Beta release include:
- MuJoCo Warp, the main Newton solver, is up to 152x faster for locomotion and up to 313x faster for manipulation than MJX on a GeForce RTX 4090. The NVIDIA RTX PRO 6000 Blackwell Series adds up to 44% more speed for MuJoCo Warp and 75% for MJX.
- Used as the next-generation Isaac Lab backend, Newton Beta trains in-hand dexterous manipulation up to 65% faster with MuJoCo Warp than with PhysX.
- Improved performance and stability of the Vertex Block Descent (VBD) solver for thin deformables such as clothing, as well as the implicit Material Point Method (MPM) solver for granular materials.
How to train a locomotion policy for a quadruped using Newton in Isaac Lab
The new Newton physics engine integration in Isaac Lab unlocks a faster, more robust workflow for robotics research and development.
This section showcases an end-to-end example of training a quadruped locomotion policy, validating its performance across simulators, and preparing it for real-world deployment. We’ll use the ANYmal robot as our case study to demonstrate this train, validate, and deploy process.
Step 1: Train a locomotion policy with Newton
The first step is to set up the repository and train a policy from scratch using one of the reinforcement learning scripts in Isaac Lab. This example trains the ANYmal-D robot to walk on flat rigid terrain using the rsl_rl framework. GPU parallelization enables training across thousands of simultaneous environments for rapid policy convergence.
To start training in headless mode for maximum performance, run the following command:
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py \
--task Isaac-Velocity-Flat-Anymal-D-v0 --num_envs 4096 --headless
With the Newton Beta release, you can now use the new lightweight Newton Visualizer to monitor training progress without the performance overhead of the full Omniverse GUI. Simply add the --newton_visualizer flag:
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py \
--task Isaac-Velocity-Flat-Anymal-D-v0 --num_envs 4096 --headless \
--newton_visualizer
After training, you’ll have a policy checkpoint (.pt file) ready for the next stage.
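Before moving on, you can sanity-check the checkpoint with plain PyTorch. The path and key names below are illustrative of a typical rsl_rl run, not exact values:

import torch

# Load the checkpoint produced by the training run (path is illustrative)
ckpt = torch.load("logs/rsl_rl/anymal_d_flat/model_1500.pt", map_location="cpu")

# rsl_rl checkpoints typically bundle policy weights with optimizer state
print(ckpt.keys())
print(sum(p.numel() for p in ckpt["model_state_dict"].values()))  # parameter count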

Step 2: Validate the policy with Sim2Sim transfer
Sim2Sim transfer is a critical sanity check to ensure a policy is not overfit to a single physics engine’s specific characteristics. A policy that can successfully transfer between simulators, like PhysX and Newton, has a much higher chance of working on a physical robot.
A key challenge is that different physics engines may parse a robot’s USD and order its joints differently. We solve this by remapping the policy’s observations and actions using a simple YAML mapping file.
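Conceptually, the remap reduces to permuting the observation and action vectors with index arrays derived from that YAML file. Here is a minimal sketch; the joint names and orders are placeholders, not the real ANYmal-D layout:

import torch

# Illustrative joint orders for two engines parsing the same USD differently
# (placeholder names, not the actual ANYmal-D joint list)
newton_joints = ["LF_HAA", "LF_HFE", "LF_KFE", "RF_HAA"]  # training-time order
physx_joints = ["LF_HAA", "RF_HAA", "LF_HFE", "LF_KFE"]   # deployment-time order

# physx_to_newton[k] is where Newton joint k lives in the PhysX layout
physx_to_newton = torch.tensor([physx_joints.index(j) for j in newton_joints])
newton_to_physx = torch.tensor([newton_joints.index(j) for j in physx_joints])

def remap_obs(physx_obs: torch.Tensor) -> torch.Tensor:
    # Reorder per-joint observations into the layout the policy was trained on
    return physx_obs[..., physx_to_newton]

def remap_actions(newton_actions: torch.Tensor) -> torch.Tensor:
    # Reorder the policy's actions back into the deployment engine's layout
    return newton_actions[..., newton_to_physx]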
To run a policy trained in Newton with PhysX-based Isaac Lab, use the provided transfer script:
./isaaclab.sh -p scripts/newton_sim2sim/rsl_rl_transfer.py \
--task=Isaac-Velocity-Flat-Anymal-D-v0 \
--num_envs=32 \
--checkpoint <PATH_TO_POLICY_CHECKPOINT> \
--policy_transfer_file scripts/sim2sim_transfer/config/newton_to_physx_anymal_d.yaml
This transfer script is available through the isaac-sim/IsaacLab GitHub repo.
Step 3: Prepare for Sim2Real deployment
The final step of the workflow is to transfer the policy trained in simulation to a physical robot.
For this example, a policy was trained for the ANYmal-D robot entirely within the standard Isaac Lab environment using the Newton backend. The training process was intentionally limited to observations that would be available from the physical robot’s sensors, such as data from the IMU and joint encoders (that is, no privileged information was used during training).
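For intuition, a deployable observation of this kind is assembled purely from onboard measurements. The composition below is illustrative; the actual observation space is defined by the Isaac Lab task configuration:

import torch

def build_observation(ang_vel, gravity_dir, command, joint_pos, joint_vel, last_action):
    # Every term is measurable on hardware: IMU (base angular velocity and
    # projected gravity), joint encoders (positions and velocities), plus the
    # velocity command and the policy's previous action. No privileged
    # simulator-only state, such as exact contact forces, is included
    return torch.cat([ang_vel, gravity_dir, command, joint_pos, joint_vel, last_action], dim=-1)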
With the help of NVIDIA partners at ETH Zurich Robotic Systems Lab (RSL), this policy was then deployed directly to their physical ANYmal robot. The resulting hardware test showed the robot successfully executing a walking gait, demonstrating a direct pathway from training in Isaac Lab to testing on a real-world system (Video 2).
This complete train, validate, and deploy process demonstrates how Newton enables the path from simulation to real-world robotics success.
Multiphysics with the Newton standalone engine
Multiphysics simulation captures coupled interactions between rigid bodies (robot hands, for example) and deformable objects (cloth, for example) within a single framework. This enables more realistic evaluation and data-driven optimization of robot design, control, and task performance.
While Newton works with Isaac Lab, developers can use it directly from Python in standalone mode to experiment with complex physical systems.
This walkthrough showcases a key feature of Newton: simulating mixed systems with different physical properties. We’ll explore an example of a rigid robot arm manipulating a deformable cloth, highlighting how the Newton API lets you combine multiple physics solvers in a single, real-time simulation.
Step 1: Launch the interactive demo
Newton comes with a suite of examples that are easy to run. The Franka robot arm and cloth demo can be launched with a single command from the root of the Newton repository.
First, ensure your environment is set up:
# Set up the uv environment for running Newton examples
uv sync --extra examples
Now, run the cloth manipulation example:
# Launch the Franka arm and cloth demo
uv run -m newton.examples cloth_franka
This opens an interactive viewer where you can watch the GPU-accelerated simulation in real time. The Franka-cloth demo features a GPU-based VBD Cloth solver. It runs at around 30 FPS on an RTX 4090, while guaranteeing penetration-free contact throughout the simulation.
Compared to other GPU-based simulators that also enforce penetration-free dynamics—such as GPU-IPC (GPU-based Incremental Potential Contact solver)—this example achieves over 300x higher performance, making it one of the fastest fully penetration-free cloth manipulation demos currently available.
Step 2: Understanding the multiphysics coupling
This demo is a great example of multiphysics, where systems with different dynamical behaviors interact. This is achieved by assigning a specialized solver to each component. Looking at the example_cloth_franka.py file, you can see how the solvers are initialized:
# Initialize a Featherstone solver for the robot
self.robot_solver = SolverFeatherstone(self.model, ...)
# Initialize a Vertex-Block Descent (VBD) solver for the cloth
self.cloth_solver = SolverVBD(self.model, ...)
You can easily switch out the robot solver by changing SolverFeatherstone to another solver that supports rigid body simulation, such as SolverMuJoCo.
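For example, keeping everything else unchanged (constructor arguments elided as in the snippet above):

# Swap the rigid-body backend without touching the cloth solver or the loop
self.robot_solver = SolverMuJoCo(self.model, ...)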
The magic happens in the simulation loop, where these solvers are coordinated. This example uses one-way coupling: the rigid body affects the deformable, but not the other way around. That is acceptable in the cloth manipulation use case, where the effect of the cloth on the robot’s dynamics can be neglected. The simulation loop logic is straightforward:
- Update the robot: The robot_solver advances the Franka arm’s state. The arm acts as a kinematic object.
- Detect collisions: The engine checks for collisions between the newly positioned robot and the cloth particles.
- Update the cloth: The cloth_solver simulates the cloth’s movement, reacting to the collisions from the robot.
# A simplified view of the simulation loop in example_cloth_franka.py
def simulate(self):
    for _step in range(self.sim_substeps):
        # 1. Step the robot solver forward
        self.robot_solver.step(self.state_0, self.state_1, ...)
        # 2. Check for contacts between the robot and the cloth
        self.contacts = self.model.collide(self.state_0, ...)
        # 3. Step the cloth solver, passing in robot contact information
        self.cloth_solver.step(self.state_0, self.state_1, ..., self.contacts, ...)
This explicit, user-controlled loop demonstrates the power of the Newton API, giving researchers fine-grained control over how different physical systems are coupled.
The team plans to extend Newton with deeper, more integrated coupling. This includes exploring two-way coupling for scenarios where the dynamic effects each system has on the other are considerable, such as a robot locomoting on deformable materials like soil or mud, where the terrain also exerts forces back on the rigid body. The team also envisions implicit coupling for select solver combinations to more automatically manage the exchange of forces between systems.
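As a rough sketch of the control flow (hypothetical, not current Newton API), a two-way coupled loop would feed the shared contacts to both solvers:

# Hypothetical two-way coupling: contacts computed once per substep are
# passed to BOTH solvers, so the terrain pushes back on the robot
for _step in range(self.sim_substeps):
    self.contacts = self.model.collide(self.state_0, ...)
    # The granular terrain solver reacts to the robot's motion...
    self.terrain_solver.step(self.state_0, self.state_1, ..., self.contacts, ...)
    # ...and the rigid-body solver feels reaction forces from the terrain
    self.robot_solver.step(self.state_0, self.state_1, ..., self.contacts, ...)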
How is the ecosystem adopting Newton?
The Newton open ecosystem is rapidly expanding, with leading universities and companies integrating specialized solvers and workflows. From tactile sensing to cloth simulation and from dexterous manipulation to rough terrain locomotion, these collaborations highlight how Newton provides a common foundation for advancing robotic learning and bridging the sim-to-real gap.
The ETH Zurich Robotic Systems Lab (RSL) has been actively leveraging Newton for multiphysics simulation in earthmoving applications, particularly for heavy equipment automation. They use the Newton Implicit Material Point Method (MPM) solver to capture granular interactions such as soil, gravel, and stones colliding with rigid machinery.
In parallel, ETH has applied Warp more broadly in robotics and graphics research, including differentiable simulation for deployable locomotion control, trajectory optimization with Gaussian splats (FOCI), and large-scale 3D garment modeling through the GarmentCodeData dataset.
Lightwheel is actively contributing to Newton through SimReady asset development and solver optimization, particularly on deformables such as soil and cables in multiphysics scenarios. The demonstration below shows the Implicit MPM solver applied across a large environment to model ANYmal quadruped locomotion over non-rigid terrain composed of multiple materials.
Peking University (PKU) is extending Newton into tactile domains by integrating their IPC-based solver, Taccel, to simulate vision-based tactile sensing for robotic manipulators. By leveraging the Newton GPU-accelerated, differentiable architecture, PKU researchers can model fine-grained contact interactions that are critical for tactile and deformable manipulation.
Style3D is bringing its deep expertise in cloth and soft-body simulation to Newton, enabling high-fidelity modeling of garments and deformable objects with complex interactions. A simplified version of the Style3D solver has already been integrated into Newton, with plans to expose APIs that allow advanced users to run full-scale simulations involving millions of vertices.
Technical University of Munich (TUM) is leveraging Newton to run trained dexterous manipulation policies, validated on real robots, back in simulation, marking an important first step toward closing the loop between sim and real. Training policies with 4,000 parallel environments in MuJoCo Warp already works. The next milestone is to transfer policies to hardware before extending the framework to fine manipulation using a spatially resolved tactile skin.
Read more on how the TUM AIDX Lab leveraged Warp to accelerate their robotics research on learning tactile in-hand manipulation agents. Learn more about how AIDX Lab is using Newton to advance their robot learning research.
Get started with Newton
The Newton physics engine delivers the simulation fidelity robotics researchers need, with a modular, extensible, and simulator-agnostic design that makes it straightforward to couple diverse solvers for robot learning.
As an open source, community-driven project, developers can use, distribute, and extend Newton, adding custom solvers and contributing back to the ecosystem.
- To get started with the standalone Newton Beta, check out the newton-physics/newton GitHub repo.
- To try Newton, explore isaac-sim/IsaacLab on GitHub.
- Visit Newton Developer for additional resources.
Learn more about the research being showcased at CoRL and Humanoids, happening September 27–October 2 in Seoul, Korea.
Also, join the 2025 BEHAVIOR Challenge, a robotics benchmark for testing reasoning, locomotion, and manipulation, featuring 50 household tasks and 10,000 tele-operated demonstrations.
Stay up to date by subscribing to our newsletter and following NVIDIA Robotics on LinkedIn, Instagram, X, and Facebook. Explore NVIDIA documentation and YouTube channels, and join the NVIDIA Developer Robotics forum. To start your robotics journey, enroll in our free NVIDIA Robotics Fundamentals courses today.