Large language models (LLMs) are reshaping the financial trading landscape by enabling sophisticated analysis of vast amounts of unstructured data to generate actionable trading insights. These AI systems can process financial news, social media sentiment, earnings reports, and market data to support stock price prediction and automated investment strategies.
The Strategic Technology Analysis Center (STAC) has been developing benchmarks for workloads critical to the financial industry for over 15 years. STAC has now developed the STAC-AI benchmark to help companies assess the end-to-end retrieval-augmented generation (RAG) and LLM inference pipeline.
This post presents the results achieved on the STAC-AI LANG6 benchmark across multiple NVIDIA platforms. It also shares recommendations for benchmarking NVIDIA TensorRT LLM against datasets that match your own workload characteristics.
STAC-AI LANG6 (Inference-Only) benchmark
In the broader context of a RAG pipeline, STAC-AI LANG6 is the part of the benchmark focusing on LLM inference performance. The benchmark tests the hardware and software stack on the Llama 3.1 8B Instruct and Llama 3.1 70B Instruct models in combination with the following custom datasets:
- EDGAR4: The prompts ask the model to summarize a company's relationship to one of various physical and financial concepts (such as commodities, currencies, interest rates, and real estate sectors). The source documents are EDGAR 10‑K paragraphs from a single security filing for a single year. The input/output sequence lengths model medium-length requests.
- EDGAR5: Questions covering several different aspects of a complete 10‑K filing. The document type is the complete text of a single EDGAR 10‑K filing. The input/output sequence length aims to model long-context requests.
These datasets, based on EDGAR filings, model medium and long-context summarization for financial trading and investment advice use cases. The prompts ask the model to perform analysis and summarization of annual reports (10-K filings) for thousands of public companies over the past five years.
The benchmark also tests two different inference scenarios, batch mode and interactive mode:
- Batch (offline) mode: All requests are given at once, and all responses are collected at once. Only throughput is measured.
- Interactive (online) mode: Requests arrive at pseudo-random times. The mean arrival rate λ (the average number of requests the system receives every second) can be set to model different usage scenarios. The benchmark collects metrics such as reaction time (RT), words per second per user (WPS/user), and total words per second (WPS), but does not set any constraint on them. RT is analogous to time to first token (TTFT) in other benchmarks, and WPS to tokens/second/user.
Note that interactive mode does not cover the combination of Llama 3.1 70B Instruct with EDGAR5.
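The interactive-mode arrival process can be sketched in a few lines. This is a minimal illustration, assuming exponential inter-arrival gaps (a Poisson process) with mean rate λ; the benchmark itself only specifies pseudo-random arrivals, so the exact distribution here is an assumption:

```python
import random

def arrival_times(rate_lambda, num_requests, seed=0):
    """Generate pseudo-random request arrival times (in seconds) for a
    Poisson process with mean arrival rate `rate_lambda` requests/s."""
    rng = random.Random(seed)
    t = 0.0
    times = []
    for _ in range(num_requests):
        # Inter-arrival gaps of a Poisson process are exponentially
        # distributed with mean 1/lambda.
        t += rng.expovariate(rate_lambda)
        times.append(t)
    return times

# Example: an average of 5 requests/s over 100 requests.
times = arrival_times(rate_lambda=5.0, num_requests=100)
print(f"empirical rate: {len(times) / times[-1]:.2f} req/s")
```

Sweeping `rate_lambda` across a range of values is what produces the different load points measured in interactive mode.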
The benchmark checks the quality of the output and word count with respect to a control set of LLM-generated responses.
While other benchmarks allow prompts to be fully preprocessed ahead of time, an important differentiator of STAC-AI is that chat templates must be applied and requests tokenized during inference. Real deployments may prefer to do this work on the server side to protect their system prompts, which imposes additional load on the CPU.
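A toy sketch of this server-side work is shown below. The template and tokenizer here are simplified stand-ins (a real deployment would call the model tokenizer's `apply_chat_template` and BPE tokenizer from a library such as Hugging Face Transformers); the point is that the system prompt stays on the server and the templating/tokenization cost lands on the server CPU:

```python
def apply_chat_template(system_prompt, user_message):
    """Simplified stand-in for a Llama-style chat template: wraps the
    hidden system prompt and the user request into one prompt string."""
    return (
        "<|system|>\n" + system_prompt + "\n"
        "<|user|>\n" + user_message + "\n"
        "<|assistant|>\n"
    )

def tokenize(text):
    """Toy whitespace tokenizer; a real server would run the model's
    subword tokenizer here, which is the CPU work STAC-AI includes."""
    return text.split()

# The system prompt never leaves the server; the client submits only
# the raw question.
prompt = apply_chat_template(
    system_prompt="You are a financial filings analyst.",
    user_message="Summarize interest-rate exposure in this 10-K excerpt.",
)
tokens = tokenize(prompt)
print(len(tokens))
```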
Hardware and software stack
This post compares two on-premises NVIDIA Hopper-based servers submitted by HPE with a cloud-based NVIDIA Blackwell node.
- The HPE ProLiant Compute DL384 Gen12, powered by the NVIDIA GH200 Grace Hopper Superchip, provides an efficient single-server solution. To see detailed results, refer to the Llama 3.1 8B companion report and Llama 3.1 70B companion report on the STAC website.
- A cloud-based VM provided by Nebius Cloud, based on a single node of an NVIDIA GB200 NVL72 system. The VM has two NVIDIA Grace CPUs and four NVIDIA Blackwell GPUs, fully connected with NVIDIA NVLink and NVSwitch for maximum network throughput. For details about the NVIDIA GB200 results, see the Llama 3.1 8B companion report and Llama 3.1 70B companion report on the STAC website.
- The latest on‑premises option is the Supermicro AS-5126GS-TNRT configured with two NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs in a single server for AI development and deployment. Each RTX PRO 6000 Blackwell GPU includes 96 GB of memory, supplying the node with substantial aggregate GPU memory for larger models, larger batch sizes, or more concurrent jobs within the same system footprint. For details about the results, see the Llama 3.1 8B companion report and Llama 3.1 70B companion report on the STAC website.
As the benchmark requires post-training quantization as part of the benchmarking procedure, the models were quantized using the NVIDIA TensorRT Model Optimizer. To leverage the most performant kernels available for each deployment, quantization was performed to FP8 on NVIDIA Hopper and to NVFP4 on NVIDIA Blackwell.
To achieve the best performance on both Hopper and Blackwell, the NVIDIA TensorRT LLM inference framework was used for efficient model execution. The quantized models were run using the TensorRT LLM PyTorch runtime, which provides a familiar, native PyTorch development experience while maintaining peak performance.
Benchmarking results on STAC-AI LANG6
Benchmarking results for both batch mode and interactive mode are detailed in this section.
Batch mode
For batch mode, NVIDIA Blackwell delivers significant speedups in all scenarios. Table 1 shows the WPS and requests per second (RPS) achieved.
Note that the NVIDIA GB200 NVL72 results were not audited by STAC.
| Model | Dataset | 2x GH200 144 GB, TensorRT LLM FP8: WPS | RPS | 4x GB200 NVL72, TensorRT LLM NVFP4: WPS | RPS | 2x RTX PRO 6000, NVFP4: WPS | RPS |
|---|---|---|---|---|---|---|---|
| Llama 3.1 8B | EDGAR4 | 8,237 | 51.5 | 37,480 | 224 | 5,500 | 32.9 |
| Llama 3.1 8B | EDGAR5 | 304 | 0.784 | 1,112 | 2.85 | 138 | 0.345 |
| Llama 3.1 70B | EDGAR4 | 1,071 | 6.77 | 5,618 | 35.9 | 831 | 5.26 |
| Llama 3.1 70B | EDGAR5 | 41.4 | 0.119 | 150 | 0.477 | 13.0 | 0.040 |

*Table 1. Batch-mode words per second (WPS) and requests per second (RPS) across platforms*
The full reports with more details across both interactive and batch modes can be found in the reports published by STAC.
Single-GPU performance was also assessed to account for the different number of GPUs on each system. Although STAC-AI does not measure per-GPU performance, the results shown in Figure 1 illustrate the throughput difference between single GPUs from each of the systems.

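The per-GPU comparison is a simple division of the Table 1 batch-mode figures by each submission's GPU count, as illustrated below for Llama 3.1 8B on EDGAR4. Note this ignores any multi-GPU scaling effects, so it is an estimate rather than a measured per-GPU result:

```python
# Batch-mode WPS from Table 1 (Llama 3.1 8B, EDGAR4), paired with the
# number of GPUs in each submission, to estimate per-GPU throughput.
systems = {
    "GH200 (FP8)": (8237, 2),
    "GB200 NVL72 (NVFP4)": (37480, 4),
    "RTX PRO 6000 (NVFP4)": (5500, 2),
}

for name, (wps, num_gpus) in systems.items():
    print(f"{name}: {wps / num_gpus:,.1f} WPS/GPU")
```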
Interactive mode
The balance between token economics (dependent on throughput) and user experience (dependent on interactivity metrics such as RT and WPS/user) is a crucial factor in modern LLM inference.
Interactive mode showcases the tradeoff across the interactivity-throughput Pareto front by selecting a range of arrival rates. Interactivity is measured by both RT and WPS/user. To facilitate visualization, the inverse of WPS/user, defined as interword latency (IWL), or \(\frac{1}{WPS/user}\), is used. In the graphs we use the 95th percentile of both metrics.
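The conversion between the two interactivity views is just a reciprocal, as this small helper shows:

```python
def interword_latency(wps_per_user):
    """IWL in seconds per word: the reciprocal of WPS/user.
    Lower IWL means a smoother per-user streaming experience."""
    return 1.0 / wps_per_user

# A user receiving 20 words/s experiences 50 ms between words.
print(interword_latency(20.0))
```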
As seen in Figure 2, GB200 NVL72 achieves a better tradeoff between throughput and both RT and IWL across the board. IWL (solid, lower is better) and RT (dashed, lower is better) are plotted versus interactive-mode throughput across model/dataset scenarios.

Figure 3 shows that, even when operating at a similar percentage of maximum throughput, NVIDIA GB200 NVL72 achieves better RT and IWL in most scenarios. Normalizing the x-axis removes raw throughput advantages and highlights interactivity-at-equal-load.

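The x-axis normalization used for this comparison can be sketched as follows: divide each measured throughput by that system's maximum, so systems of different absolute capability are compared at the same relative load. The throughput sweep below is hypothetical, for illustration only:

```python
def normalize_load(throughputs):
    """Express each measured throughput as a fraction of the system's
    peak, so different systems can be compared at equal relative load."""
    peak = max(throughputs)
    return [t / peak for t in throughputs]

# Hypothetical throughput sweep (WPS) for one system across arrival rates.
measured = [1000.0, 2000.0, 3500.0, 4000.0]
print(normalize_load(measured))  # [0.25, 0.5, 0.875, 1.0]
```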
How to benchmark TensorRT LLM with your custom data
While the STAC benchmark uses proprietary data and metrics, you can benchmark TensorRT LLM against models tailored to your specific dataset characteristics. This tutorial walks you through quantizing a model, preparing your dataset, and running performance benchmarks—all customized for your use case.
Prerequisites:
- A Docker image that includes TensorRT LLM (TensorRT LLM Release, for example).
- An NVIDIA GPU that is large enough to serve your model at the desired quantization level. You can find a support matrix for quantization in TensorRT LLM documentation.
- A Hugging Face account and token, along with access to the gated models Llama 3.1 8B Instruct or Llama 3.1 70B Instruct. You can set the HF_TOKEN environment variable to your token, and all subsequent commands will use it.
Step 1: Launch the container
The containers maintained by NVIDIA contain all of the required dependencies pre-installed. Change into an empty directory with enough space for the models and their quantizations. You can start the container on a machine with NVIDIA GPUs with the following command. Make sure you specify your Hugging Face token.
docker run --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
--gpus=all \
-u $(id -u):$(id -g) \
-e USER=$(id -un) \
-e HOME=/tmp \
-e TRITON_CACHE_DIR=/tmp/.triton \
-e TORCHINDUCTOR_CACHE_DIR=/tmp/.inductor_cache \
-e HF_HOME=/workspace/model_cache \
-e HF_TOKEN=<your_huggingface_token> \
--volume "$(pwd)":/workspace \
--workdir /workspace \
nvcr.io/nvidia/tensorrt-llm/release:1.3.0rc2
Step 2: Clone the repositories
Model quantization reduces model size and improves inference speed. Use NVIDIA Model Optimizer to quantize Llama 3.1 8B Instruct to NVFP4 format. First, clone the Model Optimizer repository for the quantization example:
git clone https://github.com/NVIDIA/TensorRT-Model-Optimizer.git -b 0.37.0
Step 3: Quantize the model
Next, execute the Hugging Face example script with the chosen model and quantization format—in this case, Llama 3.1 8B Instruct using NVFP4 quantization.
bash TensorRT-Model-Optimizer/examples/llm_ptq/scripts/huggingface_example.sh \
--model meta-llama/Llama-3.1-8B-Instruct \
--quant nvfp4
Step 4: Generate synthetic data
Use the benchmark utility to generate a synthetic dataset with the token distribution needed for a task. This example creates 30,000 requests with a fixed input sequence length of 2,048 and an output sequence length of 128. If you know the sequence-length distribution of your real traffic, nonzero standard deviations will approximate it more closely.
python /app/tensorrt_llm/benchmarks/cpp/prepare_dataset.py \
--stdout \
--tokenizer meta-llama/Llama-3.1-8B-Instruct \
token-norm-dist \
--input-mean 2048 \
--output-mean 128 \
--input-stdev 0 \
--output-stdev 0 \
--num-requests 30000 \
> dataset_2048_128.json
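Before running a long benchmark, it can be worth sanity-checking the generated file. The snippet below assumes the output is JSON Lines (one JSON record per line, as the `--stdout` redirection above produces); since field names can vary between TensorRT LLM versions, it only counts well-formed records rather than inspecting specific fields:

```python
import json

def count_requests(path):
    """Count well-formed JSON records in a JSON Lines dataset file."""
    n = 0
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            json.loads(line)  # raises ValueError if a record is malformed
            n += 1
    return n

# count_requests("dataset_2048_128.json")  # expect 30000 for this example
```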
Step 5: Run the benchmark
The trtllm-bench command can run the generated requests in an offline fashion, sending all requests at once to the TensorRT LLM runtime (closely matching the STAC-AI batch mode).
While some options are available in the CLI API, the full LLM API can be accessed through a YAML file passed with the extra_llm_api_options parameter. For the purposes of this example, enable CUDA Graphs padding. To learn about more options, see the TensorRT LLM API Reference.
cat > llm_options.yml << 'EOF'
cuda_graph_config:
  enable_padding: True
EOF
Finally, run the benchmark, specifying the model, the dataset, and the options:
trtllm-bench \
--model meta-llama/Llama-3.1-8B-Instruct \
--model_path /workspace/TensorRT-Model-Optimizer/examples/llm_ptq/saved_models_Llama-3_1-8B-Instruct_nvfp4 \
throughput \
--dataset dataset_2048_128.json \
--backend pytorch \
--extra_llm_api_options llm_options.yml
This outputs various metrics, such as request throughput and tokens/second/GPU.
Get started with TensorRT LLM benchmarking
NVIDIA GB200 NVL72 significantly advanced performance on the STAC-AI LANG6 benchmark, setting a new record for LLM inference in the financial sector. NVIDIA Blackwell delivered up to 3.2x the performance of previous architectures, combining higher throughput with consistently superior interactivity.
Alongside the new record, NVIDIA Hopper continues to deliver strong, valuable results for LLM inference workloads. Even more than three years after its initial release, Hopper proves highly effective in both batch and interactive inference scenarios, maintaining good performance metrics even at high throughput, and confirming its continued relevance for financial institutions.
To dive deeper into setting up and running your own performance evaluations, explore the TensorRT LLM Benchmarking Guide.