Deploying large language models (LLMs) poses a challenge in optimizing inference efficiency. In particular, cold start delays, where models take significant time to load into GPU memory, hurt both user experience and scalability. These models often require tens to hundreds of gigabytes of memory, which creates latency and resource challenges when scaling to meet unpredictable demand in increasingly complex production environments.
This post introduces the NVIDIA Run:ai Model Streamer, an open source Python SDK designed to mitigate these issues by concurrently reading model weights from storage and streaming them directly into GPU memory. We benchmarked it against the vLLM default Hugging Face (HF) Safetensors Loader and CoreWeave Tensorizer on local SSDs and Amazon S3.
The experiments described in this post show that the NVIDIA Run:ai Model Streamer significantly reduces model loading times, lowering cold start latency even in cloud environments. It is also compatible with the Safetensors format, so no weight conversion is required. Our findings show that both the choice of storage and concurrent streaming are critical for efficient LLM deployment. In short: to improve inference performance, use the NVIDIA Run:ai Model Streamer to reduce cold-start latency, saturate your storage throughput, and accelerate time-to-inference.
How is a model loaded to a GPU for inference?
To provide some background information, this section explains the two main steps involved in loading a machine learning model into GPU memory for inference: reading weights from storage into CPU memory, and transferring them to the GPU. Understanding this process is key to optimizing inference latency, especially in large-scale or cloud-based deployments.
- Reading weights from storage to CPU memory: The model’s weights are loaded from storage into CPU memory. Weights can be stored in various formats such as .pt, .h5, and .safetensors, or in custom formats; storage can be local, cluster-wide, or in the cloud. This post uses the .safetensors format due to its wide adoption, though other formats are used elsewhere.
- Moving the model to GPU: The model’s parameters and relevant tensors are transferred to GPU memory.
Loading models from cloud-based storage such as Amazon S3 often involves an extra step: the weights are first downloaded to local disk before being moved into CPU and then GPU memory.
Traditionally, these steps occur sequentially, making model loading times one of the most significant bottlenecks when scaling inference.
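To make the baseline concrete, here is a minimal sketch of this sequential two-step flow using the standard safetensors and PyTorch APIs. The file path and device are placeholders; this illustrates the process described above rather than any specific loader from the benchmarks.

```python
import torch
from safetensors.torch import load_file

weights_path = "model.safetensors"  # placeholder; cloud weights would already be downloaded to local disk

# Step 1: read the weights from storage into CPU memory.
state_dict = load_file(weights_path, device="cpu")

# Step 2: transfer the tensors from CPU memory to the GPU.
state_dict = {name: tensor.to("cuda:0") for name, tensor in state_dict.items()}
torch.cuda.synchronize()  # make sure all copies have completed before serving
```

Because the two steps run back to back, the GPU transfer cannot start until the storage read has finished, which is exactly the bottleneck the loaders discussed below try to remove.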
How does the Model Streamer work?
Model Streamer is an SDK with a high-performance C++ backend designed to accelerate model loading into GPUs from various storage sources (for example, network file systems, cloud object stores, and local disks). It uses multiple threads to read tensors concurrently from a file in object or file storage into a dedicated buffer in CPU memory. Each tensor has an identifier, enabling simultaneous reading and transfer: while some tensors are read from storage to CPU, others are moved from CPU to GPU.
The tool takes full advantage of the fact that the GPU and CPU have separate subsystems: GPUs access CPU memory directly over PCIe without CPU intervention, so storage reads and memory transfers can overlap in real time. Our experiments ran on an AWS g5.12xlarge instance with NVIDIA A10G GPUs and 2nd Gen AMD EPYC CPUs, an architecture well balanced for high-throughput parallel data handling.
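The sketch below illustrates this overlap in plain Python with the standard safetensors and PyTorch APIs. It is not the Model Streamer implementation, which uses a high-performance C++ backend; it only shows how reader threads filling pinned CPU buffers can run concurrently with host-to-GPU copies.

```python
import queue
import threading

import torch
from safetensors import safe_open

def stream_to_gpu(path: str, num_readers: int = 8, device: str = "cuda:0") -> dict:
    """Read tensors with several threads while copying already-read tensors to the GPU."""
    with safe_open(path, framework="pt", device="cpu") as f:
        keys = list(f.keys())

    tensor_queue: queue.Queue = queue.Queue(maxsize=2 * num_readers)

    def reader(worker_keys):
        # Each reader opens its own handle and pulls its share of tensors into CPU memory.
        with safe_open(path, framework="pt", device="cpu") as f:
            for name in worker_keys:
                cpu_tensor = f.get_tensor(name).pin_memory()  # pinned memory allows async DMA to the GPU
                tensor_queue.put((name, cpu_tensor))
        tensor_queue.put(None)  # signal that this reader is done

    threads = [
        threading.Thread(target=reader, args=(keys[i::num_readers],), daemon=True)
        for i in range(num_readers)
    ]
    for t in threads:
        t.start()

    gpu_tensors, finished = {}, 0
    while finished < num_readers:
        item = tensor_queue.get()
        if item is None:
            finished += 1
            continue
        name, cpu_tensor = item
        # This CPU-to-GPU copy overlaps with the reads still running in the background threads.
        gpu_tensors[name] = cpu_tensor.to(device, non_blocking=True)
    torch.cuda.synchronize()
    return gpu_tensors
```

In the real Model Streamer, this producer/consumer pattern is implemented in C++, with a dedicated CPU-side buffer and work balanced across tensors of different sizes.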
Key features of the Model Streamer include:
- Concurrency: Multiple threads read model weight files in parallel, including support for splitting large tensors.
- Balanced workload for reading: Work is distributed based on tensor size to saturate storage bandwidth.
- Support for multiple storage types: Works with SSDs, remote storage, and cloud object stores like S3.
- No tensor format conversion: Supports Safetensors natively, avoiding conversion overhead.
- Easy integration: Offers a Python API and an iterator similar to Safetensors but with concurrent background reading. Integrates easily with inference engines like vLLM and TGI.
For more details about setup and usage, see the Model Streamer documentation.
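As a concrete starting point, the snippet below follows the usage pattern shown in the Model Streamer documentation. The file path is a placeholder, and the environment variable used to set concurrency is an assumption on our part; check the documentation for your SDK version for the exact class, method, and configuration names.

```python
import os

from runai_model_streamer import SafetensorsStreamer

# Assumption: reader concurrency is configured through an environment variable;
# verify the variable name against the documentation for your release.
os.environ["RUNAI_STREAMER_CONCURRENCY"] = "16"

file_path = "/models/llama3-8b/model.safetensors"  # placeholder; a local path or an s3:// URI

gpu_tensors = {}
with SafetensorsStreamer() as streamer:
    streamer.stream_file(file_path)              # background threads start reading from storage
    for name, tensor in streamer.get_tensors():  # tensors are yielded as soon as they reach CPU memory
        gpu_tensors[name] = tensor.to("cuda:0")  # CPU-to-GPU copies overlap with the remaining reads
```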
How does the HF Safetensors Loader work?
The HF Safetensors Loader is an open source utility that provides a safe and fast format for saving and loading multiple tensors. It uses a memory-mapped file system to minimize data copying. On a CPU, tensors are directly mapped into memory. On a GPU, it creates an empty tensor with PyTorch, then moves the tensor data using cudaMemcpy, facilitating a zero-copy loading process.
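For comparison, here is a minimal sketch of that path using the standard safetensors API, which exposes the memory-mapped, per-tensor access described above (the path is a placeholder):

```python
from safetensors import safe_open

weights_path = "model.safetensors"  # placeholder

tensors = {}
with safe_open(weights_path, framework="pt", device="cuda:0") as f:
    for name in f.keys():
        # Each tensor is materialized on the GPU directly from the memory-mapped file.
        tensors[name] = f.get_tensor(name)
```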
How does the CoreWeave Tensorizer work?
CoreWeave Tensorizer is an open source tool that serializes model weights and their corresponding tensors into a single file. Instead of loading an entire model into RAM before moving it to the GPU, Tensorizer streams the model data tensor by tensor from an HTTP/HTTPS or S3 source.
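The sketch below outlines both sides of that workflow with the tensorizer package's TensorSerializer and TensorDeserializer classes. The model name and output path are placeholders, and option names can vary between tensorizer releases, so treat this as an outline rather than the exact setup used in the benchmarks.

```python
import torch
from tensorizer import TensorDeserializer, TensorSerializer
from tensorizer.utils import no_init_or_tensor
from transformers import AutoConfig, AutoModelForCausalLM

model_name = "meta-llama/Meta-Llama-3-8B"      # placeholder
tensorized_path = "/models/llama3-8b.tensors"  # local path; HTTP(S) and S3 URIs are also supported for reads

# One-time conversion: serialize the model weights into the tensorizer format.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
serializer = TensorSerializer(tensorized_path)
serializer.write_module(model)
serializer.close()

# Load time: build the model skeleton without initializing weights, then stream
# the serialized tensors into it on the GPU, tensor by tensor.
config = AutoConfig.from_pretrained(model_name)
empty_model = no_init_or_tensor(lambda: AutoModelForCausalLM.from_config(config))
deserializer = TensorDeserializer(tensorized_path, device="cuda:0")
deserializer.load_into_module(empty_model)
deserializer.close()
```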
Where loading meets inference engines: Loading weights with vLLM
Model serving is not complete without an inference engine. Of the many inference engines and servers available, this post, and the benchmarking study behind it, focuses on vLLM and its model loading capabilities.
The vLLM framework uses HF Safetensors model loading by default. It also supports CoreWeave Tensorizer for loading models from S3 endpoints; note, however, that the Tensorizer library requires converting weights from the Safetensors format to the Tensorizer format.
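In practice, the loader is selected through vLLM's load_format option. The sketch below uses vLLM's offline LLM API; the runai_streamer format and the model_loader_extra_config keys shown here are taken from recent vLLM documentation and may differ from the 0.5.5 integration benchmarked in this post, so verify them against your vLLM version.

```python
from vllm import LLM

# In practice, construct only one engine per process; the three variants are shown together for comparison.

# Default: HF Safetensors loading.
llm = LLM(model="meta-llama/Meta-Llama-3-8B")

# NVIDIA Run:ai Model Streamer: stream weights concurrently from disk or object storage.
llm = LLM(
    model="meta-llama/Meta-Llama-3-8B",             # or an s3:// path to the Safetensors weights
    load_format="runai_streamer",
    model_loader_extra_config={"concurrency": 16},  # assumption: key name as documented in current vLLM
)

# CoreWeave Tensorizer: weights must first be serialized into the tensorizer format.
llm = LLM(
    model="meta-llama/Meta-Llama-3-8B",
    load_format="tensorizer",
    model_loader_extra_config={"tensorizer_uri": "s3://my-bucket/llama3-8b.tensors"},  # placeholder URI
)
```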
Comparing model loader performance across three storage types
We compared the performance of different model loaders (NVIDIA Run:ai Model Streamer, CoreWeave Tensorizer, and HF Safetensors Loader) across three storage types:
- Experiment #1: GP3 SSD – Measured model loading times with various loaders.
- Experiment #2: IO2 SSD – Tested the same loaders on IO2 SSD to evaluate the impact of higher IOPS and throughput.
- Experiment #3: Amazon S3 – Compared loaders on cloud storage; the HF Safetensors Loader was excluded because it does not support S3.
- Experiment #4: vLLM with different loaders – Integrated Model Streamer into vLLM to measure full load and readiness times across storage types, comparing it to default HF Safetensors Loader and Tensorizer. Safetensors Loader excluded from S3 tests.
All tests ran under cold-start conditions to avoid cache effects. For S3, a wait of at least two minutes between tests ensured cold-start behavior. Tensorizer experiments used models serialized per the Tensorizer recipe, and benchmarking followed the Tensorizer benchmarking recipe, both with optional hashing disabled.
Experiment setup
The experiments were conducted using the setup outlined in Table 1.
| Component | Details |
| --- | --- |
| Model | Llama 3 8B, an LLM weighing 15 GB, stored in a single Safetensors file |
| Hardware | AWS g5.12xlarge instance featuring four NVIDIA A10G GPUs (only one GPU was used for all tests to maintain consistency) |
| Software stack | CUDA 12.4, vLLM 0.5.5 (Transformers 4.44.2), NVIDIA Run:ai Model Streamer 0.6.0, Tensorizer 2.9.0, Transformers 4.45.0.dev0, Accelerate 0.34.2 |
| Storage types | GP3 SSD: 750 GB, 16K IOPS, 1,000 MiB/s; IO2 SSD: 500 GB, 100K IOPS, 4,000 MiB/s; Amazon S3: same AWS region as the instance to minimize latency |

Table 1. Experiment setup
For the experiments involving Tensorizer, the same model was serialized into Tensorizer’s own tensor format using the recipe provided by the Tensorizer framework.
Experiment #1 results: GP3 SSD
In this initial experiment, we compared the loading performance of different model loaders using GP3 SSD storage. We evaluated the impact of concurrency on the Model Streamer (Figure 1) and examined how the number of workers affected Tensorizer. For Model Streamer, increasing concurrency—the number of concurrent threads reading from storage into CPU memory—led to a notable decrease in model loading time.
At concurrency 1, Model Streamer loaded the model in 47.56 seconds, essentially on par with the HF Safetensors Loader at 47.99 seconds. With concurrency 16, loading time dropped to 14.34 seconds, sustaining roughly 1 GiB/s, the maximum throughput of the GP3 volume. Beyond that point, storage throughput limited further gains.
Tensorizer showed similar behavior. With one worker, loading time was 50.74 seconds, close to Safetensors Loader. With 16 workers, it achieved 16.11 seconds and 984.4 MiB/s throughput—also nearing GP3 SSD bandwidth.
The storage throughput limit of GP3 SSD became the bottleneck for both Model Streamer and Tensorizer, limiting performance. This motivated testing a higher-throughput storage solution in Experiment #2.
| Concurrency | Model Streamer: time to load model to GPU (sec.) | HF Safetensors Loader: time to load model to GPU (sec.) |
| --- | --- | --- |
| 1 | 47.56 | 47.99 |
| 4 | 14.43 | |
| 8 | 14.42 | |
| 16 | 14.34 | |
| Number of readers | Tensorizer: time to load model to GPU (sec.) |
| --- | --- |
| 1 | 50.74 |
| 4 | 17.38 |
| 8 | 16.49 |
| 16 | 16.11 |
| 32 | 17.18 |
| 64 | 16.44 |
| 100 | 16.81 |


Experiment #2 results: IO2 SSD
For the second experiment, we used IO2 SSD, which offers significantly higher throughput than GP3 SSD. As before, we analyzed the effect of concurrency on Model Streamer (Figure 3) and the number of workers on Tensorizer.
At concurrency 1, Model Streamer and the HF Safetensors Loader showed similar loading times of 43.71 seconds and 47 seconds, respectively. As concurrency increased, however, Model Streamer showed much more pronounced gains than it did on GP3 SSD. With concurrency 8, the model loaded in just 7.53 seconds, around 6x faster than the HF Safetensors Loader.
For Tensorizer, the performance also improved significantly. The optimal result was observed with eight workers, achieving a model loading time of 10.36 seconds (Figure 4). Beyond that, adding more workers did not yield further performance improvements, likely due to storage throughput limitations.
Despite the theoretical maximum throughput of 4 GiB/s for IO2 SSD, our experiments consistently hit a ceiling of around 2 GiB/s with Model Streamer and 1.6 GiB/s with Tensorizer. This suggests practical throughput limitations on the AWS infrastructure side rather than in the loaders themselves.
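A quick back-of-the-envelope check makes this ceiling plausible: dividing the model size by the best load time gives an effective throughput close to the measured one. The numbers below are approximate, since the reported throughput was measured by the loaders themselves and GB versus GiB rounding is ignored.

```python
model_size_gb = 15.0     # Llama 3 8B Safetensors file (Table 1)
best_load_time_s = 7.53  # Model Streamer on IO2 SSD at concurrency 8

print(model_size_gb / best_load_time_s)  # ~2.0 GB/s, roughly in line with the observed ~2 GiB/s ceiling
```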
| Concurrency | Model Streamer: time to load model to GPU (sec.) | HF Safetensors Loader: time to load model to GPU (sec.) |
| --- | --- | --- |
| 1 | 43.71 | 47 |
| 4 | 11.19 | |
| 8 | 7.53 | |
| 16 | 7.61 | |
| 20 | 7.62 | |
| Number of readers | Tensorizer: time to load model to GPU (sec.) |
| --- | --- |
| 1 | 43.85 |
| 4 | 14.44 |
| 8 | 10.36 |
| 16 | 10.61 |
| 32 | 10.95 |


Experiment #3 results: Amazon S3
For cloud storage, Experiment #3 compared the performance of Model Streamer and Tensorizer using Amazon S3 as the storage medium. Since HF Safetensors Loader does not support S3, it was not included in this benchmarking experiment. For the Tensorizer experiments, we used different numbers of workers and chose the best result for Figure 6, which was achieved with 16 workers in this case.
The results showed that Model Streamer outperformed Tensorizer at all tested concurrency levels. At concurrency 4, Model Streamer loaded the model in 28.24 seconds. As concurrency increased, it continued to improve, reaching a load time of 4.88 seconds at concurrency 32, compared to 37.36 seconds for Tensorizer’s best result with 16 workers. This demonstrates Model Streamer’s superior efficiency when loading from cloud-based storage.
Note that during these experiments, we observed unexpected caching behavior on AWS S3. When experiments were repeated in quick succession, the model load times significantly improved, likely due to some form of S3 caching mechanism. To ensure consistency and avoid benefiting from this “warm cache,” we introduced at least a 3-minute wait between each test run. The results presented here reflect the times recorded after these intervals, ensuring they represent cold-start conditions.
| Concurrency | Model Streamer: time to load model to GPU (sec.) |
| --- | --- |
| 4 | 28.24 |
| 16 | 8.45 |
| 32 | 4.88 |
| 64 | 5.01 |
| Number of readers | Tensorizer: time to load model to GPU (sec.) |
| --- | --- |
| 8 | 86.05 |
| 16 | 37.36 |
| 32 | 48.67 |
| 64 | 41.49 |
| 80 | 41.43 |


Experiment #4 results: vLLM with all loaders
This experiment integrated different model loaders into vLLM to measure the total time from model loading to readiness for inference. Model Streamer, Safetensors Loader, and Tensorizer were tested on local storage (GP3 SSD and IO2 SSD), while Hugging Face Safetensors was excluded from S3 since it doesn’t support S3 loading. Tensorizer was tested with vLLM on S3 and compared to Model Streamer.
For each vLLM plus Model Streamer experiment, we used the optimal concurrency level determined in the earlier experiments. Specifically:
- For GP3 SSD, a concurrency level of 16 was used (Figure 1).
- For IO2 SSD, a concurrency level of 8 was used (Figure 3).
- For S3 storage, a higher concurrency level of 32 was used (Figure 5).
Similarly, for the Tensorizer plus vLLM integration, we used the optimal number of workers determined in the previous experiments. Specifically:
- GP3 SSD: 16 workers
- IO2 SSD: 8 workers
- S3: 16 workers
Model Streamer reduced total readiness time to 35.08 seconds on GP3 SSD and 28.28 seconds on IO2 SSD, compared to HF Safetensors Loader at 66.13 seconds and 62.69 seconds, respectively. Tensorizer took 36.19 seconds on GP3 and 30.88 seconds on IO2 SSD, similarly cutting times roughly in half versus Safetensors. On S3, Model Streamer achieved 23.18 seconds total readiness, while Tensorizer required 65.18 seconds.
Total time until the vLLM engine is ready for requests (sec.), by loader and storage type:

| Loader | GP3 SSD | IO2 SSD | Amazon S3 |
| --- | --- | --- | --- |
| Safetensors Loader | 66.13 | 62.69 | not supported |
| Model Streamer | 35.08 | 28.28 | 23.18 |
| Tensorizer | 36.19 | 30.88 | 65.18 |

Get started with NVIDIA Run:ai Model Streamer
Cold start latency remains a key bottleneck in delivering responsive, scalable LLM inference, especially in dynamic or cloud-native environments. Our benchmarks demonstrate that the NVIDIA Run:ai Model Streamer significantly accelerates model loading times across local and remote storage, outperforming other common loaders. By enabling concurrent weight loading and GPU memory streaming, it offers a practical and high-impact solution for production-scale inference workloads.
If you’re building or scaling inference systems, especially with large models or cloud-based storage, these results offer immediate takeaways: use the Model Streamer to reduce cold-start latency, saturate your storage throughput, and accelerate time-to-inference. With easy integration into frameworks like vLLM and support for high-concurrency, multi-storage environments, it’s a drop-in optimization that can yield measurable gains. Boost your model loading performance with the NVIDIA Run:ai Model Streamer.