Turning real-world environments into interactive simulation no longer requires days or weeks of work. With NVIDIA Omniverse NuRec and 3DGUT (3D Gaussian with Unscented Transforms), you can reconstruct photorealistic 3D scenes from simple sensor data and deploy them in NVIDIA Isaac Sim or CARLA Simulator—instantly.
This post walks you through how to capture real-world data, train a reconstruction, and load the results into Isaac Sim.
How to create an interactive simulation from photos
Neural reconstruction enables efficient robot training in realistic simulations, improving sim-to-real transfer. The following steps streamline neural reconstruction and rendering into a recipe that works across different environments.

Step 1: Capture the real-world scene
Capture approximately 100 photos from all angles, with good lighting and plenty of overlap between images to help with feature matching (example specs: f/8, 1/100 s or faster, 18 mm or similar).
Step 2: Generate sparse reconstruction with COLMAP
To generate a sparse point cloud and camera parameters, use COLMAP, a general-purpose Structure-from-Motion (SfM) and Multi-View Stereo (MVS) pipeline. You can achieve this through its GUI using automatic reconstruction, or by executing commands for feature extraction, feature matching, and sparse reconstruction. For compatibility with 3DGUT, select either the pinhole or simple pinhole camera model.
# Feature detection & extraction
$ colmap feature_extractor \
--database_path ./colmap/database.db \
--image_path ./images/ \
--ImageReader.single_camera 1 \
--ImageReader.camera_model PINHOLE \
--SiftExtraction.max_image_size 2000 \
--SiftExtraction.estimate_affine_shape 1 \
--SiftExtraction.domain_size_pooling 1
# Feature matching
$ colmap exhaustive_matcher \
--database_path ./colmap/database.db \
--SiftMatching.use_gpu 1
# Global SFM
$ colmap mapper \
--database_path ./colmap/database.db \
--image_path ./images/ \
--output_path ./colmap/sparse
# Visualize for verification
$ colmap gui --import_path ./colmap/sparse/0 \
--database_path ./colmap/database.db \
--image_path ./images/
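Before training, it can help to confirm that COLMAP registered most of your images and used a pinhole-family camera model. The following is a minimal verification sketch, assuming the pycolmap Python bindings are installed (pip install pycolmap); it is not part of the pipeline above.
# Optional sanity check of the sparse model with pycolmap
import pycolmap
# Path matches the mapper output from the commands above
rec = pycolmap.Reconstruction("./colmap/sparse/0")
print(rec.summary())  # registered images, 3D points, mean reprojection error, etc.
# 3DGUT expects a PINHOLE or SIMPLE_PINHOLE camera model
for cam_id, cam in rec.cameras.items():
    print(cam_id, cam)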
Step 3: Train with 3DGUT for dense reconstruction
Use the COLMAP outputs to train with 3DGUT and the config apps/colmap_3dgut_mcmc.yaml.
$ conda activate 3dgrut
$ python train.py --config-name apps/colmap_3dgut_mcmc.yaml \
path=/path/to/colmap/ \
out_dir=/path/to/out/ \
experiment_name=3dgut_mcmc \
export_usdz.enabled=true \
export_usdz.apply_normalizing_transform=true
Step 4: Export to USD and normalize
Once training completes, export your reconstructed scene as a USD file using these essential flags:
export_usdz.enabled=true
export_usdz.apply_normalizing_transform=true
Check out a tutorial for an example script that can be run directly from the Script Editor or as a Standalone Application.
This creates a USD asset that integrates seamlessly with the Isaac Sim simulation ecosystem.
Step 5: Deploy the reconstructed scene
The USD assets generated through this pipeline can be loaded or referenced directly into Isaac Sim, just like any other USD asset. Simply use File > Import or drag-and-drop the USD file into the stage from the content browser.
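If you prefer to load the scene programmatically, for example from the Script Editor, a minimal sketch might look like the following; the prim path and the USD file path are placeholders rather than outputs of the steps above.
# Minimal Script Editor sketch: reference the exported NuRec USD onto the current stage
# The file path and prim path below are illustrative placeholders
import omni.usd
stage = omni.usd.get_context().get_stage()
scene_prim = stage.DefinePrim("/World/NuRecScene", "Xform")
scene_prim.GetReferences().AddReference("/path/to/out/scene.usdz")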
After loading the USD asset, you can create a ground plane inside Isaac Sim for mobility simulation, as explained in Video 2.
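As a rough sketch of that step, assuming the Isaac Sim core Python API (module paths differ between Isaac Sim releases), adding a ground plane could look like this:
# Hedged sketch: add a simple ground plane for mobility simulation
# Import path assumes recent Isaac Sim releases; older ones expose GroundPlane under omni.isaac.core.objects
from isaacsim.core.api.objects import GroundPlane
GroundPlane(prim_path="/World/GroundPlane", size=100.0, z_position=0.0)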
Reconstructed scenes are also available on the NVIDIA Physical AI Dataset for quick import and immediate experimentation.
How to replay autonomous vehicle scenes in CARLA
For autonomous vehicle (AV) development, Omniverse NuRec libraries integrated with the open source CARLA AV simulator open up powerful possibilities. This is an experimental new feature that works with sample scenes that have already been reconstructed and are available in the NVIDIA Physical AI Dataset.

Step 1: Run CARLA and set up scripts
Select a scene from the Physical AI Dataset, then navigate to your CARLA directory and run the following script:
./PythonAPI/examples/nvidia/install_nurec.sh
Step 2: Replay the scene
Next, replay the Omniverse NuRec scenario using the following:
source carla/bin/activate
cd PythonAPI/examples/nvidia/
python example_replay_recording.py --usdz-filename /path/to/scenario.usdz
Step 3: Capture data
You can also capture data within the simulation for further testing. To capture images for dataset generation, run the following:
source carla/bin/activate
cd PythonAPI/examples/nvidia/
python example_save_images.py --usdz-filename /path/to/scenario.usdz --output-dir ./captured_images
This integration enables you to replay real-world drives in a controllable simulation environment, complete with all the actors and dynamics of the original scene.
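Once the replay is running, you can also attach to the simulation with the standard CARLA Python API. The following is a small, hedged sketch for inspecting the actors in the replayed scene; the host, port, and filter are assumptions to adapt to your setup.
# Hedged sketch: connect to the running CARLA server and list the vehicles in the scene
import carla
client = carla.Client("localhost", 2000)  # default host/port; adjust if CARLA runs elsewhere
client.set_timeout(10.0)
world = client.get_world()
for actor in world.get_actors().filter("vehicle.*"):
    print(actor.id, actor.type_id, actor.get_transform().location)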
How to enhance reconstructed scenes further
Want to take your reconstructed scenes even further? NVIDIA Cosmos Transfer, a multi-controlnet world foundation model, amplifies robotics and AV simulation by enabling precise, controllable video generation. Use Cosmos Transfer to synthesize diverse environments, lighting conditions, and weather scenarios. You can also dynamically add and edit objects using multimodal controls like segmentation, depth maps, HD maps, and more.

This approach streamlines scenario-rich dataset creation, reduces manual effort, and ensures rigorous, photorealistic validation. With Cosmos Transfer-1 distilled from 70 diffusion steps down to a single step, you can generate photorealistic, controllable video in under 30 seconds. Building on these performance improvements, Cosmos Transfer-2 is coming soon to further accelerate synthetic data generation (SDG) for AV development.
Why Gaussian-based rendering accelerates simulation workflows
3D Gaussians represent a transformative leap in how the real world is reconstructed and simulated for robotics and autonomous vehicles. By streamlining the path from data capture to photorealistic, interactive environments, Omniverse NuRec libraries leverage Gaussian-based rendering to dramatically accelerate simulation workflows for scalable, robust testing.
Combining COLMAP's proven structure-from-motion pipeline with 3DGUT's advanced rendering capabilities creates a robust foundation that handles complex real-world scenarios that would stump traditional reconstruction methods, from challenging lighting conditions to intricate camera distortions.
Get started rendering real-world scenes in interactive simulation
Whether you’re a researcher aiming to push the boundaries of sim-to-real transfer, or an engineer seeking efficient, high-fidelity scene generation, these advances empower you to rapidly iterate and confidently deploy solutions grounded in real-world complexity.
Ready to get started?
- Download sample reconstructed data from the NVIDIA Physical AI Dataset
- Access the 3DGUT implementation from the nv-tlabs/3dgrut GitHub repo
- Simulate AI-driven robotics solutions in physically based virtual environments with NVIDIA Isaac Sim 5.0
- Learn more about integrating CARLA Simulator with NVIDIA Omniverse NuRec
The future of physical AI simulation is here, and it’s more accessible than ever. Start building richer, more realistic digital twins for tomorrow’s intelligent machines.
Watch the NVIDIA Research special address at SIGGRAPH.
Stay up to date by subscribing to NVIDIA news and following NVIDIA Omniverse on Discord and YouTube.
- Visit our Omniverse developer page to get all the essentials you need to get started
- Access a collection of OpenUSD resources, including the new self-paced Learn OpenUSD training curriculum
- Tune into upcoming OpenUSD Insiders livestreams and connect with the NVIDIA Developer Community
Get started with developer starter kits to quickly develop and enhance your own applications and services.