Posts by Ekin Karabulut
AI Platforms / Deployment
Sep 16, 2025
Reducing Cold Start Latency for LLM Inference with NVIDIA Run:ai Model Streamer
Deploying large language models (LLMs) poses a challenge for optimizing inference efficiency. In particular, cold start delays—where models take significant...
13 MIN READ
AI Platforms / Deployment
Sep 02, 2025
Cut Model Deployment Costs While Keeping Performance With GPU Memory Swap
Deploying large language models (LLMs) at scale presents a dual challenge: ensuring fast responsiveness during high demand while managing the costs of GPUs....
6 MIN READ
AI Platforms / Deployment
Apr 01, 2025
NVIDIA Open Sources Run:ai Scheduler to Foster Community Collaboration
Today, NVIDIA announced the open-source release of the KAI Scheduler, a Kubernetes-native GPU scheduling solution, now available under the Apache 2.0 license....
10 MIN READ