24/7 Simulation Loops: How Agentic AI Keeps Subsurface Engineering Moving

The subsurface industry is at a critical point in its digital evolution. For decades, unlocking reservoir potential has relied on experts performing essential and time-intensive manual workflows. 

As data complexity grows, the gap between machine speed and human bandwidth has become a primary bottleneck.

On-demand simulation workflows are currently hampered by both manual data overhead and inherent operational latency. The need for engineers to manually aggregate, synthesize, and translate disparate technical materials creates significant knowledge consolidation bottlenecks that stretch project cycles. 

This is further compounded by the asynchronous nature of simulation jobs; when simulations finish or fail during off-hours or while engineers are juggling competing priorities, dead time accumulates. Consequently, what should be a standard 24-hour turnaround often spirals into a multi-day delay, stalling progress across global teams.

In this post, we explain how applying agentic AI on top of the NVIDIA full-stack accelerated computing platform transforms manual, expert-limited workflows into always-on, compute-driven simulation workflows across subsurface engineering and beyond. 

The agentic shift 

Agentic AI transforms this landscape by absorbing repetitive technical hurdles, allowing engineers to move beyond “good enough” results to explore a wider solution space and drive higher asset value. 

In this paradigm, the engineer shifts to a strategic supervisory role—remaining in the loop for high-level direction while agents handle execution. This post demonstrates how to build such a system.

While our examples focus on subsurface simulation, the framework is tool-agnostic and applicable to any industry reliant on complex simulation workflows.

The master architecture, shown in Figure 1 below, integrates a central orchestration agent with specialized agents designed for simulator interaction and workflow management. 

The reservoir simulation assistant: Accelerating daily workflows

The reservoir simulation assistant acts as a digital domain expert bridging the gap between the engineer, technical documentation, and the simulator. It serves as a complementary fast-track, working alongside your existing modeling environment to handle repetitive tasks and technical hurdles.

Video 1. Demo of the reservoir simulation assistant

Key takeaways

The reservoir simulation assistant is designed to augment, not replace, the established tools of the trade. By offloading the administrative portion of the simulation loop, engineers can reclaim significant bandwidth:

  • Instant interaction: Whether you prefer navigating through nested menus or executing commands via a terminal, the agent replaces tedious file-hunting with instant results. From launching a run by dragging a simulation deck to the chat, to asking, “What is the skin factor for Well-X?,” the system handles the manual lookups and deck setups in seconds.
  • Rapid analysis: The agent goes beyond plotting time-series curves to provide quick diagnostics. It can instantly answer complex questions such as “Why am I seeing an early water breakthrough at Well-X?”, a diagnosis that would normally require hours of manual cross-referencing.
  • Frictionless “what-if” iteration: Execute agile scenario testing without the syntax headaches. The agent handles tedious keyword editing and baseline comparisons, while its self-healing logic proactively fixes convergence issues and input errors, with an optional human-in-the-loop, to keep simulations running 24/7.
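The self-healing logic mentioned above can be sketched in a few lines. This is an illustrative stub, not the production agent: `run_simulation`, `ConvergenceError`, and the timestep-halving policy are hypothetical stand-ins for a real simulator tool call and its recovery heuristics.

```python
class ConvergenceError(Exception):
    """Raised when the (stubbed) simulator fails to converge."""

def run_simulation(deck):
    """Stand-in for a simulator launch; a real tool would shell out to the engine."""
    if deck["max_timestep_days"] > 10:
        raise ConvergenceError("timestep too aggressive")
    return {"status": "converged", "deck": deck}

def self_healing_run(deck, max_retries=3, ask_human=None):
    """Retry failed runs with progressively smaller timesteps; escalate if retries run out."""
    for _ in range(max_retries):
        try:
            return run_simulation(deck)
        except ConvergenceError:
            # Hypothetical healing policy: halve the maximum timestep and retry.
            deck = {**deck, "max_timestep_days": deck["max_timestep_days"] / 2}
    if ask_human is not None:
        return ask_human(deck)  # optional human-in-the-loop fallback
    raise RuntimeError("unable to converge after retries")

result = self_healing_run({"max_timestep_days": 30})
print(result["status"], result["deck"]["max_timestep_days"])
```

In a production agent the healing policy would come from the LLM's diagnosis of the simulator log rather than a fixed halving rule, but the retry-then-escalate shape is the same.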

Ultimately, this personal agent transforms a multi-step manual administrative process into a single, natural conversation. 

While our demo features a standalone interface, the potential to integrate these agentic capabilities directly into industry-standard, high-fidelity modeling platforms represents an exciting evolution for the subsurface digital ecosystem.

Multi-agent squads: Orchestrating complex engineering studies

While the reservoir simulation assistant enhances daily tasks, including rapid scenario testing and manual lookups, these are often just the preliminary steps for larger, more complex simulation studies such as history matching and field development optimization. These workflows anchor the subsurface decision-making process, yet they are notoriously difficult because they sit at the intersection of two major bottlenecks: operational latency and the expertise gap.

First, these workflows are the primary drivers of dead time. Because a single workflow cycle can take days, results often finish during off-hours and sit idle. This asynchronous gap frequently turns a standard 24-hour run into a multi-day delay.

Second, and more critically, these studies require a “heuristic pause.” After every cycle, an expert must manually synthesize high-dimensional data to decide how to pivot parameters for the next run. This level of expertise usually demands years of experience or reliance on external consultancies, and such specialized resources are scarce by nature. The heuristic pause creates a cognitive bottleneck, adding significant latency to the project timeline as the workflow waits for expert intervention.

To solve this, we move from a single-agent model to a multi-agent squad. This system mimics a specialized reservoir engineering team, utilizing a group of digital junior engineers to autonomously perform and monitor large-scale optimization jobs. 

By acting as a 24/7 orchestration layer, the squad ensures that as soon as one cycle finishes, the data is synthesized, the next parameters are proposed, and the subsequent run is launched immediately—effectively eliminating the idle dead time between iterations.
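The always-on loop reduces, in essence, to synthesize → propose → launch. The pure-Python sketch below is a deliberate simplification: `synthesize`, `propose_next`, and `run_cycle` are hypothetical stand-ins for the result-analyst agent, the proposer agent, and a batch of simulation jobs.

```python
def run_cycle(params):
    """Stand-in for a batch of simulation jobs returning raw objective values."""
    return [params["mutation_rate"] * 100]

def synthesize(results):
    """Stand-in for the result-analyst agent: reduce raw output to a score."""
    return sum(results) / len(results)

def propose_next(score, params):
    """Stand-in for the proposer agent: nudge parameters based on the last score."""
    factor = 0.9 if score > 0 else 1.1
    return {**params, "mutation_rate": max(0.01, params["mutation_rate"] * factor)}

def orchestrate(params, cycles=3):
    """As soon as one cycle finishes: synthesize, propose, launch -- no idle time."""
    history = []
    for _ in range(cycles):
        score = synthesize(run_cycle(params))
        history.append((params["mutation_rate"], score))
        params = propose_next(score, params)
    return history

history = orchestrate({"mutation_rate": 0.2})
```

In the real system each step is an agent backed by an LLM and the "cycles" are asynchronous HPC jobs, but the key property is the same: the loop never waits for a human to notice that a run has finished.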

Key principles of the agentic workflow:

  • Human-in-the-loop (HITL): Despite the high level of autonomy, engineers maintain total supervisory control. They review and approve agent-proposed plans before launching workflows with hundreds of simulation jobs.
  • Trusted ecosystems: The agents utilize industry-standard simulation and orchestration software via tool calls. They accelerate delivery not by replacing the physics, but by removing the manual, repetitive tasks that cause bottlenecks.
  • Agnostic and future-proof: While this implementation leverages OPM Flow and in-house Python code for simulation and optimization, respectively, the framework is designed for modularity. The agentic layer is decoupled from the physics engine, allowing it to be seamlessly integrated with industry-standard commercial simulators or proprietary codebases.
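The human-in-the-loop principle can be made concrete as an approval gate that sits between the agents' proposed plan and the job launcher. This is a hedged sketch; `cautious_reviewer` and the job-count threshold are hypothetical policies, and in practice the reviewer would be an interactive prompt to the engineer.

```python
def approve(plan, reviewer):
    """Present an agent-proposed plan; only launch if the supervisor signs off."""
    decision = reviewer(plan)
    if decision == "approve":
        return f"launching {plan['n_jobs']} jobs"
    return "plan returned to agents for revision"

def cautious_reviewer(plan):
    """Hypothetical policy: auto-approve small batches, escalate large ones."""
    return "approve" if plan["n_jobs"] <= 500 else "revise"

print(approve({"n_jobs": 300}, cautious_reviewer))
```

The point of the gate is that autonomy is bounded: agents can iterate freely below the threshold, but campaigns with hundreds of simulation jobs always pass through a human decision.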

Case study: Well placement optimization

To demonstrate this in action, we applied the multi-agent squad to a well-placement optimization for the Brugge benchmark model. The objective was to maximize net present value (NPV) by optimizing the locations of 30 wells. 
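For readers less familiar with the objective, NPV discounts each period's cash flow by (1 + r)^t, so the optimizer is rewarded for producing value early. A minimal sketch follows; the `simulate` callable is a hypothetical stand-in for a full reservoir run that maps well locations to yearly cash flows.

```python
def npv(cash_flows, discount_rate=0.1):
    """Net present value of yearly cash flows (year 0 is undiscounted)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

def objective(well_locations, simulate):
    """Optimization objective: simulate a placement, then discount its cash flows."""
    return npv(simulate(well_locations))
```

Each candidate set of 30 well locations is scored this way, and the optimizer searches for the placement that maximizes the discounted total.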

  • Collaborative planning: A proposer agent suggests optimization strategies (e.g., genetic algorithms vs. particle swarm optimization with certain sets of hyperparameters), while a critic agent refines them via a debate loop.
  • Dynamic orchestration: Agents adjust tuning parameters in real-time based on performance metrics and domain knowledge.
  • Operational stability: A job manager monitors health to eliminate dead time from unexpected failures.
  • Automated data synthesis: A result analyst translates high-dimensional raw data into actionable insights.
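The proposer/critic debate loop has a simple skeleton: propose, critique against constraints, revise until approved. The sketch below is illustrative only; the strategy choices, the budget check, and the `sampling_saturated` flag are hypothetical stand-ins for LLM-driven reasoning grounded in manuals and past experiments.

```python
def proposer(context):
    """Stand-in for the proposer agent: pick a strategy given past results."""
    if context.get("sampling_saturated"):
        return {"algorithm": "PSO", "swarm_size": 40}
    return {"algorithm": "GA", "population": 200, "mutation_rate": 0.3}

def critic(plan, budget):
    """Stand-in for the critic agent: reject plans that exceed the simulation budget."""
    cost = plan.get("population", plan.get("swarm_size", 0))
    if cost <= budget:
        return "approve", plan
    return "revise", {**plan, "population": budget}  # trim to the budget

def debate(context, budget=150, rounds=3):
    """Proposer/critic loop: iterate until the critic approves or rounds run out."""
    plan = proposer(context)
    for _ in range(rounds):
        verdict, plan = critic(plan, budget)
        if verdict == "approve":
            break
    return plan

plan = debate({})
```

In the actual system both roles are LLM agents exchanging natural-language critiques, but bounding the loop with a fixed round count, as above, is what keeps the debate from stalling the pipeline.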

In this specific example, the agents’ discussions, grounded in technical manuals and past experiments, evolved strategically. In early iterations, they prioritized broad exploration, utilizing large populations and high mutation rates to sample the solution space within a strict budget. As the workflow progressed, the thought process shifted toward evolutionary depth: for instance, pivoting from sampling-heavy GA variants to PSO-inspired configurations to test whether performance was limited by initial sampling or by generational depth.

The building blocks: NVIDIA Inference Microservices

The intelligence driving these agents is powered by NVIDIA Inference Microservices (NIM), providing the low-latency, production-ready inference required for real-time engineering reasoning.

  • Advanced reasoning: Agents utilize Llama-3.3-Nemotron-Super-49B-v1.5, a state-of-the-art model designed for complex reasoning, planning, and multi-turn agentic workflows.
  • Contextual intelligence: Retrieval-augmented generation (RAG) is enabled by Llama-3.2-NeMo-Retriever-300M-Embed-v2, ensuring agent responses are grounded in proprietary technical documentation and simulation manuals.
  • Modular architecture: The system integrates ChatNVIDIA, the LangChain-compatible interface, enabling seamless orchestration within LangChain and LangGraph frameworks. This provides structured function calling for programmatic interaction with simulator APIs, database queries, and custom tools while maintaining reliable state management across multi-step workflows.
  • Flexible deployment: The architecture supports rapid prototyping using build.nvidia.com API endpoints, then allows single-line configuration changes to hot-swap to self-hosted LLM deployments for secure, on-premises execution with full data sovereignty.
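The hosted-to-self-hosted swap described above looks roughly like the configuration fragment below, using the `ChatNVIDIA` class from the `langchain-nvidia-ai-endpoints` package. Treat it as a sketch: the model identifier should be verified against the build.nvidia.com catalog, and the localhost URL is a hypothetical on-premises NIM endpoint.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Prototype against hosted API endpoints (expects NVIDIA_API_KEY in the environment).
llm = ChatNVIDIA(model="nvidia/llama-3.3-nemotron-super-49b-v1.5")

# Hot-swap to a self-hosted NIM for on-premises execution:
# only the base_url line changes, the agent code stays identical.
llm = ChatNVIDIA(
    base_url="http://localhost:8000/v1",  # hypothetical on-prem NIM address
    model="nvidia/llama-3.3-nemotron-super-49b-v1.5",
)
```

Because `ChatNVIDIA` is LangChain-compatible, the same `llm` object plugs into LangGraph nodes and tool-calling chains unchanged in either deployment mode.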

This agentic system shifts engineers’ focus from task execution to orchestration. Time previously spent on manual retrieval and monitoring is redirected toward exploring alternative scenarios and optimizing asset strategies that time constraints would otherwise leave unexamined.

While these results were demonstrated within the reservoir simulation domain, the proposed agentic system is inherently simulation-tool agnostic. Thus, the framework extends naturally to adjacent frontiers, from geologic CO2 sequestration and geothermal energy to any industry that relies on complex, iterative simulation workflows. 

The opportunity cost of inaction is now measurable. While traditional workflows wait in queues, agentic systems are already exploring the next iteration.

Getting started

We’re making these capabilities accessible to the community. Access our open-source repository releases on GitHub and try out the end-to-end multi-agent workflow. You can further customize these agentic workflows for your specific use cases.
