Using the right tool and model for a task is a challenging and ever-present engineering problem in agent design. At NVIDIA Research, we’re making fast progress toward automating it away with an approach that trains and uses a separate model, which we call an “orchestrator”, to act as a supervisor over all of the other models and tools.
The orchestrator’s job is to consider the task in the context of user preferences (do they want the result fast, cheap, with the highest level of accuracy possible, or some combination of these?) and then manage other models and call on tools in the task-solving conversation to reach the goal. Crucially, as it turns out, small models are already powerful enough to handle this burden if tuned appropriately.
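To make the idea concrete, the following is a minimal sketch of what such an orchestration loop can look like. The helpers call_orchestrator and execute_action are hypothetical placeholders for your own model-serving and tool-execution code, not part of any released API.

# A minimal, hypothetical orchestration loop. call_orchestrator and
# execute_action are placeholders, not part of a released API.
def orchestrate(task, preferences, tools, worker_models, max_turns=50):
    history = [{"role": "user", "content": task, "preferences": preferences}]
    for _ in range(max_turns):
        # The orchestrator picks the next action: a tool call,
        # a delegation to a worker model, or the final answer.
        action = call_orchestrator(history, tools, worker_models)
        if action["type"] == "final_answer":
            return action["content"]
        result = execute_action(action, tools, worker_models)
        history.append({"role": "tool", "content": result})
    return None  # turn budget exhausted without a final answer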
While it may be surprising to make large models subordinate to a small one, the arrangement plays to their respective advantages. Precisely because of their limited size, small models are unburdened by excessive memorized knowledge and can be tuned to capture the essence of problem-solving.
To build orchestrators, we introduce ToolOrchestra, our flagship method, which involves data preparation, synthetic data generation, multi-objective reinforcement-learning training, and comprehensive evaluation of orchestration methods and models.

Why train an orchestrator?
You might be wondering: “Using an orchestrator is an intriguing concept, but why should I train a model for it? Wouldn’t it be enough to just edit my agent’s prompts so it acts as an orchestrator?” The short answer is no. The reason ToolOrchestra-trained orchestrators beat other methods lies in the training objectives. During training, the orchestrator generates experimental trajectories, and some solve the problem better than others: some reach the correct solution cheaply and quickly, while others make extensive use of expensive tools and take a long time to reach a conclusion. ToolOrchestra’s reinforcement-learning setup explicitly rewards high problem-solving accuracy, low cost, and short time-to-solution according to the cost preferences for the given problem.
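As a sketch of what such a multi-objective reward can look like, consider a preference-weighted combination of the three signals. The weighting and normalization below are illustrative assumptions, not the exact formulation from our paper:

# Illustrative preference-weighted reward; the exact normalization and
# weighting used by ToolOrchestra are described in the paper.
def trajectory_reward(is_correct, cost_usd, latency_s, prefs):
    # prefs weights sum to 1.0, e.g. {"accuracy": 0.6, "cost": 0.2, "latency": 0.2}
    accuracy_term = 1.0 if is_correct else 0.0
    cost_term = 1.0 / (1.0 + cost_usd)      # cheaper trajectories score higher
    latency_term = 1.0 / (1.0 + latency_s)  # faster trajectories score higher
    return (prefs["accuracy"] * accuracy_term
            + prefs["cost"] * cost_term
            + prefs["latency"] * latency_term)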
What are the results of using an orchestrator?
To demonstrate the effectiveness of ToolOrchestra, we trained a small model, Orchestrator-8B, to tackle some of the most difficult tasks available, including problems from Humanity’s Last Exam (HLE), FRAMES, and τ²-Bench.
We then gave out-of-the-box monolithic LLMs, prompted orchestrators running on frontier LLMs, and Orchestrator-8B access to the same tools and measured their performance. The outcome is shown in Table 1. In summary, Orchestrator-8B outperforms all of its competitors, regardless of their size or advertised capabilities, while incurring the lowest cost and problem-solving latency.
| Tools | Model(s) | HLE (↑) | FRAMES (↑) | τ²-Bench (↑) | Cost (↓) | Latency (↓) |
| --- | --- | --- | --- | --- | --- | --- |
| Existing reported SOTA | GPT-5 | 35.2 | – | 84.2‡ | – | – |
| | o3 | 24.3 | – | 68.4 | – | – |
| | GPT-4o | 5.3 | – | 43.8 | – | – |
| No tools | Qwen3-8B | 3.2 | 24.2 | –* | 0.2 | 0.6 |
| | Llama-Nemotron-49B | 3.6 | 25.6 | –* | 0.4 | 1.1 |
| | Llama-3.3-70B | 3.8 | 32.4 | –* | 0.5 | 1.4 |
| | Qwen3-235B-A22B | 5.2 | 34.3 | –* | 2.6 | 3.3 |
| | Claude Opus 4.1 | 11.7 | 58.2 | –* | 27.4 | 8.2 |
| | GPT-5 | 23.4 | 66.3 | –* | 6.2 | 4.1 |
| Basic tools | Qwen3-8B | 4.7 | 26.5 | 40.7 | 1.3 | 2.2 |
| | Llama-Nemotron-49B | 6.8 | 28.2 | 23.2 | 2.5 | 3.5 |
| | Llama-3.3-70B | 4.6 | 42.3 | 17.6 | 2.8 | 4.3 |
| | Qwen3-235B-A22B | 14.0 | 39.5 | 52.9 | 12.3 | 10.2 |
| | Claude Opus 4.1 | 19.8 | 63.5 | 46.0 | 76.2 | 32.5 |
| | GPT-5 | 35.1 | 74.0 | 77.7 | 30.2 | 19.8 |
| Basic tools, specialized LLMs, generalist LLMs | Qwen3-8B | 30.6 | 68.9 | 72.3 | 27.6 | 18.3 |
| | Llama-Nemotron-49B | 25.8 | 57.9 | 66.7 | 25.6 | 17.1 |
| | Llama-3.3-70B | 19.7 | 52.4 | 55.8 | 19.7 | 13.4 |
| | Qwen3-235B-A22B | 32.8 | 74.2 | 75.6 | 29.7 | 21.2 |
| | Claude Opus 4.1 | 34.6 | 72.8 | 76.8 | 52.5 | 25.6 |
| | GPT-5 | 21.2 | 57.5 | 62.3 | 17.8 | 13.6 |
| | Orchestrator-8B | 37.1 | 76.3 | 80.2 | 9.2 | 8.2 |

Table 1. Accuracy (higher is better), cost, and latency (lower is better) of monolithic LLMs, prompted orchestrators, and Orchestrator-8B given access to the same tools.
To drive home the point about Orchestrator-8B’s efficiency, we measured the accuracy and cost of leading frontier models and Orchestrator-8B while restricting each model’s reasoning and acting to 10, 20, 50, and 100 conversational turns. The outcome is visualized in the figure below. We observed that regardless of the conversational length limit imposed, Orchestrator-8B always outperforms its competition while maintaining a lower dollar cost.
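Reproducing this kind of sweep on your own tasks is straightforward: cap the number of conversational turns and record accuracy and accumulated cost at each budget. A hypothetical sketch, where run_with_turn_limit and benchmark_tasks stand in for your own evaluation harness:

# Hypothetical turn-budget sweep; run_with_turn_limit and benchmark_tasks
# stand in for your own evaluation harness.
for max_turns in (10, 20, 50, 100):
    results = [run_with_turn_limit(task, max_turns) for task in benchmark_tasks]
    accuracy = sum(r.correct for r in results) / len(results)
    total_cost = sum(r.cost_usd for r in results)
    print(f"turns={max_turns} accuracy={accuracy:.3f} cost=${total_cost:.2f}")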

How to train an orchestrator?
To train an orchestrator for your own purposes while following the ToolOrchestra method, you’ll need a model, some data, and our training code.
To show how little is needed to build an orchestrator for challenging tasks, such as the hard benchmarks we tested Orchestrator-8B on, we used Qwen3-8B as our underlying model, generated only 552 synthetic problems, and used only 1,296 prompts in training.
Step 1: Choose the underlying model
The choice of model to train into an orchestrator is entirely up to you. We recommend picking the smallest language model aligned with the nature of your agent. NVIDIA Nemotron Nano, the Qwen3 family, and the xLAM family are just a few of the options.
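For example, with Hugging Face Transformers, pulling down a candidate base model takes only a few lines (shown here with Qwen3-8B; substitute whichever checkpoint fits your agent):

# Load a candidate base model with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"  # swap in your preferred small model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)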
Step 2: Prepare and generate data
The good news about data for ToolOrchestra is that you really don’t need much to get started. The method assumes that much of the data will be synthetically generated, and we describe the generation process in detail in our paper. In broad terms, you’ll want to start with a description, or a few examples, of your agent solving problems with its preferred tools. Using large models, you can then generate many more similar synthetic tasks.
The following is a sketch of the code that can be used to generate samples similar to the ones used to train Orchestrator-8B.
def generate_samples(domain):
    # Generate subject areas for the domain, then derive a schema and data model.
    subjects = generate_subjects(domain)
    schema = generate_schema(subjects)
    data_model = generate_datamodel(schema)
    # Populate a synthetic database and build tools that operate over it.
    database = generate_database(domain, schema, data_model)
    tools = generate_tools(domain, database)
    # Generate tasks that are solvable with the database and tools.
    tasks = generate_tasks(database, tools)
    return tasks

samples = generate_samples("retail")  # pass your own agent's domain here
...
You can jump right in and experience the real data generation magic.
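To give a feel for the output, one generated sample might look like the record below. The field names are hypothetical and only illustrate the shape of the data, not the exact schema behind Orchestrator-8B:

# Hypothetical shape of one generated sample; see the paper for the real schema.
task = {
    "domain": "retail",
    "instruction": "Cancel the customer's duplicate order and refund it.",
    "tools": ["lookup_order", "cancel_order", "issue_refund"],
    "preferences": {"accuracy": 0.6, "cost": 0.2, "latency": 0.2},
}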
Step 3: Start training
Once equipped with your model choice and some data, you can directly use or adapt ToolOrchestra’s released code to train your own orchestrator. This sketch can get you started (more details are in the repository README).
# Build the training dataset from your raw examples and tool definitions.
train_dataset = prepare_data(raw_examples, tools)
train_dataloader = DataLoader(train_dataset)  # e.g., torch.utils.data.DataLoader

# The reward manager scores trajectories on accuracy, cost, and latency.
reward_model = RewardManager(config)

# Launch reinforcement-learning training across Ray workers.
trainer = RayTrainer(config, reward_model)
trainer.init_workers()
trainer.start()
...
You can kick off your own training run and watch your orchestrator come to life!
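The config object in the sketch above is where the multi-objective trade-off lives. A hypothetical fragment, just to illustrate the kind of knobs involved (the real keys are defined in the released code):

# Hypothetical config fragment; consult the repository for the actual keys.
config = {
    "model": "Qwen/Qwen3-8B",
    "reward_weights": {"accuracy": 0.6, "cost": 0.2, "latency": 0.2},
    "rollout": {"max_turns": 50, "trajectories_per_prompt": 8},
}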
Step 4: Visualize your progress
ToolOrchestra’s training code supports logging directly to Weights & Biases (wandb).
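Logging your own reward components takes only a few lines of standard wandb calls; the metric names and the training_metrics variable below are illustrative, not part of the released code.

# Log reward components to Weights & Biases; metric names are illustrative.
import wandb

run = wandb.init(project="toolorchestra")
for step, metrics in enumerate(training_metrics):  # training_metrics: your own records
    wandb.log({
        "reward/accuracy": metrics["accuracy"],
        "reward/cost": metrics["cost"],
        "reward/latency": metrics["latency"],
    }, step=step)

The following shows example visualizations from Orchestrator-8B’s runs.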

The benefits of orchestration
Engineering efficient, high-performance agents today involves a constant struggle to balance capability and cost. Developers must manually weigh every choice (model size, tool use, query length, reasoning depth), knowing that one wrong call can push costs skyward or compromise the quality of the result. This complexity scales unforgivingly as the number of queries that need to be engineered grows, making cost-aware agent optimization one of the most challenging and time-intensive aspects of building real-world AI systems.
ToolOrchestra changes that. By training small orchestrators to direct large models and tools with surgical precision, calling on them only when needed, we automate this balancing act in a way that outperforms monolithic LLMs and prompted orchestrator setups across accuracy, latency, and dollar cost.
Orchestrator-8B, the model we trained to demonstrate the method, is concrete evidence that the right strategy can beat brute model-size scaling or prompt-engineering dexterity. It delivers state-of-the-art performance on hard benchmarks while using resources far more efficiently. In short, orchestration enables agents to be both powerful and nimble.
Looking ahead: The rise of compound AI systems
The dominant paradigm in AI over the past few years has been that intelligence is first built into large foundation models by training and then specialized for real-world use cases through in-context learning. This belief is increasingly being challenged, as the AI community continues to produce more and more examples of compound AI systems outperforming monolithic LLMs while being safer, faster, and more cost-effective.
ToolOrchestra represents our first step toward fundamentally intelligent compound AI systems as a paradigm emerging to replace AI monoliths. It is further aligned with our long-term position that small language models are ultimately the key to scalable agentic AI.
To learn more:
- Read our paper.
- Read more on the role of small language models.
- Get in touch with our research team.
- Stay up to date on NVIDIA Nemotron by subscribing to NVIDIA news and following NVIDIA AI on LinkedIn, X, Discord, and YouTube.