Many companies are embracing digital twins to improve their products and services. Digital twins can be used for complex simulations of factories and warehouses or to understand how products will look and behave in the real world. However, many businesses don’t know how to begin making their existing 3D art assets valuable within a simulation environment.
The existing universe of 3D assets is inadequate for the next wave of industries and AI, capturing only the visual appearance of an object. Designed primarily for visualization, 3D art assets don’t contain the metadata required for simulation beyond their visual fidelity. Art assets also come in a variety of file formats, making a researcher’s job of assembling a dataset incredibly time-consuming and difficult to manage.
To help digital twins and virtual worlds for global industries come to life, NVIDIA introduced a new class of 3D assets named NVIDIA Omniverse SimReady. With SimReady, users can create photorealistic 3D art assets: stunning, full-fidelity renderings that represent the real world for realistic and accurate simulations.
SimReady assets are more than just 3D objects—they encompass accurate physical properties, behavior, and connected data streams built on Universal Scene Description (USD). They include robust metadata attached to content that enables assets to be inserted into any simulation and behave as they would in the real world. Most importantly, by using SimReady, teams have consistency within their content library.
Asset simulation requirements
In order to run simulations, 3D art assets need to include specific metadata to help the simulator understand how each object should behave. These types of metadata include:
- Semantic labeling
- Synthetic data generation
- Sensor support
- Physics and physical behaviors
Semantic labeling
Semantic labeling defines what an object represents in a simulation. At its most basic, it’s the taxonomy, structure, and ontology between elements within the simulator.
More precisely, semantic labels help identify the various components of a 3D model in a predictable and consistent way to train simulation algorithms. These labels provide a ground-truth understanding that helps the computer understand the simulation environment so it can be trained against dynamic events and predict outcomes that match the real world.
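The exact labeling schema is defined by the SimReady specification itself, but the idea can be sketched with the open-source USD Python API: attach a label type and value to a prim so downstream tools can query and segment data by class. In the minimal sketch below, the attribute names and the forklift example are illustrative placeholders, not the official schema.

```python
# Illustrative sketch: attaching a semantic class label to a prim with the
# open-source USD (pxr) Python API. The attribute names are examples chosen
# for this sketch, not the official SimReady semantics schema.
from pxr import Usd, UsdGeom, Sdf

stage = Usd.Stage.CreateNew("forklift_example.usda")
prim = UsdGeom.Xform.Define(stage, "/World/Forklift").GetPrim()

# Pair a label type (the taxonomy being used) with a label value (the class),
# so a downstream tool can query prims by class when segmenting a dataset.
prim.CreateAttribute("example:semantic:type", Sdf.ValueTypeNames.String).Set("class")
prim.CreateAttribute("example:semantic:data", Sdf.ValueTypeNames.String).Set("forklift")

stage.GetRootLayer().Save()
```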
Synthetic data generation
Synthetic data generation is used to create and randomize various scenarios to train simulation models around specific goals, such as computer vision. Randomization is possible with synthetic data across multiple facets such as lighting, material changes, poses, and occlusions.
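In Omniverse, this kind of randomization is typically handled by tools such as Omniverse Replicator, but the underlying idea can be sketched with the plain USD Python API: author time-sampled variations of lighting and pose so that each frame produces a different training image. The prim paths, value ranges, and frame count below are placeholders chosen for illustration.

```python
# Illustrative domain-randomization sketch using the USD (pxr) Python API:
# vary lighting intensity and object pose per frame. Paths, ranges, and the
# number of frames are placeholders, not production settings.
import random
from pxr import Usd, UsdGeom, UsdLux, Gf

stage = Usd.Stage.CreateNew("randomized_scene.usda")
stage.SetStartTimeCode(1)
stage.SetEndTimeCode(10)

light = UsdLux.DistantLight.Define(stage, "/World/Sun")
pallet = UsdGeom.Xform.Define(stage, "/World/Pallet")
translate_op = pallet.AddTranslateOp()
rotate_op = pallet.AddRotateYOp()

for frame in range(1, 11):
    # Randomize lighting and placement so every frame yields a distinct sample.
    light.GetIntensityAttr().Set(random.uniform(500.0, 5000.0), frame)
    translate_op.Set(Gf.Vec3d(random.uniform(-2.0, 2.0), 0.0, random.uniform(-2.0, 2.0)), frame)
    rotate_op.Set(random.uniform(0.0, 360.0), frame)

stage.GetRootLayer().Save()
```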
Sensor support
Sensor support helps the simulator understand how computers in various devices see the world. There are numerous types of sensors that can be integrated into devices to analyze and understand their surroundings. These sensors can be non-visual sensors like LIDAR and radar or contact sensors that can be placed on the periphery of a robot.
Physics and physical behaviors
Physics and physical behaviors help the simulator understand how objects should behave in the real world, based on properties such as mass, center of gravity, and friction of materials. They also determine how these objects behave and interact in a physically real manner, simulating events such as collisions.
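USD already provides physics schemas that can carry this kind of information alongside the geometry. The snippet below is a minimal sketch, with placeholder values rather than calibrated data, that applies rigid-body, collision, and mass properties to an asset and authors a physics material with friction and restitution.

```python
# Minimal sketch: authoring physical properties with the UsdPhysics schemas.
# The numbers (mass, center of mass, friction, restitution) are placeholders.
from pxr import Usd, UsdGeom, UsdPhysics, UsdShade, Gf

stage = Usd.Stage.CreateNew("physics_example.usda")
crate = UsdGeom.Cube.Define(stage, "/World/Crate")
prim = crate.GetPrim()

# Rigid body + collider so the simulator can move the asset and detect contacts.
UsdPhysics.RigidBodyAPI.Apply(prim)
UsdPhysics.CollisionAPI.Apply(prim)

# Mass properties, including an explicit center of mass.
mass_api = UsdPhysics.MassAPI.Apply(prim)
mass_api.CreateMassAttr(12.5)                        # kilograms
mass_api.CreateCenterOfMassAttr(Gf.Vec3f(0.0, -0.1, 0.0))

# A physics material describing surface behavior (friction, restitution).
material = UsdShade.Material.Define(stage, "/World/Looks/WoodPhysics")
mat_api = UsdPhysics.MaterialAPI.Apply(material.GetPrim())
mat_api.CreateStaticFrictionAttr(0.5)
mat_api.CreateDynamicFrictionAttr(0.4)
mat_api.CreateRestitutionAttr(0.1)

stage.GetRootLayer().Save()
```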

Moving toward a SimReady standard
By default, core simulation metadata is included in SimReady assets and accessible upon import. Semantic labels are the foundation: without them, segmenting a dataset becomes inefficient or nearly impossible. Physics with collision meshes is a central component of every art asset. Physical materials are automatically assigned, so simulation systems understand the nature of the materials they’re interacting with.
SimReady models aren’t a single “file.” They leverage the modular nature and flexibility of USD, comprising a set of files that each capture a different aspect of an asset. This modularity ensures that content can be non-destructively augmented in the future to make it even more robust.
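As a rough picture of that modularity (a sketch of one possible layout, not the prescribed SimReady structure), a thin top-level USD file can reference the geometry from one file and layer materials and physics on top from others, so each aspect can be revised or overridden without touching the rest.

```python
# Sketch of a modular USD asset: a thin top-level file composed from separate
# layers. The file names are illustrative, not a prescribed SimReady layout.
from pxr import Usd

stage = Usd.Stage.CreateNew("crate.usda")
asset_prim = stage.DefinePrim("/Crate", "Xform")
stage.SetDefaultPrim(asset_prim)

# Reference the geometry from its own file...
asset_prim.GetReferences().AddReference("./crate_geometry.usda")

# ...and add materials and physics as sublayers, so each aspect can be
# swapped out or non-destructively overridden later.
root = stage.GetRootLayer()
root.subLayerPaths.append("./crate_materials.usda")
root.subLayerPaths.append("./crate_physics.usda")

root.Save()
```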
As the SimReady specification moves towards a standard for 3D workflows, these 3D art assets will accelerate the development of digital twins and virtual worlds for global industries. This includes understanding the optimal design of factory assembly lines, tracking inventories and processes, and training autonomous machines to interact with real-world settings. SimReady will streamline simulations across a wide range of industries.
Recently, I joined the Practical AI podcast to discuss the latest developments to the SimReady standard and how SimReady assets will be used across a wide range of industries.
Practical AI 209: 3D assets & simulation at NVIDIA – Listen on Changelog.com
Getting started with SimReady
SimReady assets can be used for a wide range of use cases like autonomous driving, robotics, and digital twins of warehouses, data centers, and retail stores. These simulations come to life in NVIDIA Omniverse, where you can simulate large-scale worlds that bring new possibilities to industry workflows.
Join NVIDIA GTC 2023
To learn more about using SimReady art assets in practice, register free for NVIDIA GTC and add the session How to Build Simulation-Ready USD 3D Assets to your calendar. GTC also features targeted session tracks for developers building metaverse applications, extensions, and microservices.
Resources
Watch a short demo on SimReady with NVIDIA PhysX assets to learn more about using assets in Omniverse.
Subscribe to the Omniverse newsletter to receive updates about Simulation Ready assets. Follow us on Instagram, Twitter, YouTube, and Medium to stay up to date with the next releases and use cases.
Visit the Omniverse Developer Resource Center for additional resources, view the latest tutorials on Omniverse, and check out the forums for support. Join the Omniverse community, Discord server, and Twitch Channel to chat with the community, and subscribe to get the latest Omniverse news.
Hey Everyone,
I’ve been driving the development of our SimReady specifications internally to help users understand how NVIDIA sees 3D content evolving to provide extensive value within Omniverse and the simulation experiences and tools you’re creating on top of the platform.
Having been involved in the 3D industry for a long time now, I want to start a discussion to help enable talented 3D designers so you can provide content that is immediately valuable for simulation, synthetic data generation (SDG), and more. Whether you’re an individual artist or work for a company producing goods for manufacture, we would love to engage.
As I learn more about various customer needs, I’m finding SimReady is an endless well of opportunity for creators. This blog talks about the high-level ways we’re starting to consider the standardization of 3D asset creation (whether it comes from CAD/CAM tools or a DCC package like Blender, 3ds Max, or Houdini) and how these 3D assets can be effectively leveraged within the Omniverse ecosystem.
I’ll be hosting a session and Q&A at GTC in March around practical SimReady asset creation and use within our simulation tools like Isaac Sim, but I would love to hear from you in the interim.
Let me know what you’re up to and what questions you already have around SimReady. We’re just getting started, so there is lots to explore and understand as everyone’s workflows and simulation needs are different.
-=Beau
I think you are exactly right. A SimReady 3D object website would do very well and will become the new population of objects for the metaverse.
3D objects with physics added, ready to be dropped into a world, will behave as they would in the real world. You are perfectly placed for this type of venture. It’s like you unconsciously chose all the right steps to be here at this time.
Get at it.
SimReady 3D objects with varying levels of complexity.
Hi,
I have been involved in the 3D industry and simulation twins for many years, working with industrial machinery equipment.
One of the most challenging tasks is converting 3D assets coming from CAD/CAM tools (Pro/E, Creo, SolidWorks, …) into 3D meshes while preserving frame orientations across the conversion.
How do you achieve this task?
Defining collision shapes is a challenging process too.
Then, for certain reasons, the final product must target a web application; from what I’ve seen, the USD format is not web-ready at the moment. How can it compete with glTF solutions?
Regards,
Stéph