Inference has emerged as the new frontier of complexity in AI. Modern models are evolving into agentic systems capable of multi-step reasoning, persistent memory, and long-horizon context—enabling them to tackle complex tasks across domains such as software development, video generation, and deep research. These workloads place unprecedented demands on infrastructure, introducing new challenges in compute, memory, and networking that require a fundamental rethinking of how inference is scaled and optimized.
Among these challenges, processing massive context for a specific class of workloads has become increasingly critical. In software development, for example, AI systems must reason over entire codebases, maintain cross-file dependencies, and understand repository-level structure—transforming coding assistants from autocomplete tools into intelligent collaborators. Similarly, long-form video and research applications demand sustained coherence and memory across millions of tokens. These requirements are pushing the boundaries of what current infrastructure can support.
To address this shift, the NVIDIA SMART framework provides a path forward—optimizing inference across scale, multidimensional performance, architecture, ROI, and the broader technology ecosystem. It emphasizes a full-stack disaggregated infrastructure that enables efficient allocation of compute and memory resources. Platforms like NVIDIA Blackwell and NVIDIA GB200 NVL72, combined with NVFP4 for low-precision inference and open source software such as NVIDIA TensorRT-LLM and NVIDIA Dynamo, are redefining inference performance across the AI landscape.
This blog explores the next evolution in disaggregated inference infrastructure and introduces NVIDIA Rubin CPX—a purpose-built GPU designed to meet the demands of long-context AI workloads with greater efficiency and ROI.
Disaggregated inference: a scalable approach to AI complexity
Inference consists of two distinct phases: the context phase and the generation phase, each placing fundamentally different demands on infrastructure. The context phase is compute-bound, requiring high-throughput processing to ingest and analyze large volumes of input data and produce the first output token. In contrast, the generation phase is memory bandwidth-bound, relying on fast memory transfers and high-speed interconnects, such as NVLink, to sustain token-by-token output performance.
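To make the distinction concrete, the toy sketch below (an illustration only, not production inference code; model_forward is a stand-in for a real transformer forward pass) traces a single request through both phases: one parallel prefill pass over the whole prompt populates the KV cache and yields the first token, then decoding advances one token per step against that growing cache.

```python
# Minimal sketch of the two phases of autoregressive inference.
# "model_forward" is a stand-in for a real transformer forward pass.

def model_forward(tokens, kv_cache):
    """Toy forward pass: appends to the KV cache and returns a 'next token'."""
    kv_cache.extend(tokens)          # in a real model: per-layer key/value tensors
    return sum(kv_cache) % 50_000    # stand-in for sampling from logits

def generate(prompt_tokens, max_new_tokens):
    kv_cache = []

    # Context (prefill) phase: compute-bound. The entire prompt is processed in
    # parallel to fill the KV cache and produce the first output token.
    next_token = model_forward(prompt_tokens, kv_cache)
    output = [next_token]

    # Generation (decode) phase: memory-bandwidth-bound. Only one new token is
    # computed per step, but the full KV cache must be read at every step.
    for _ in range(max_new_tokens - 1):
        next_token = model_forward([next_token], kv_cache)
        output.append(next_token)
    return output

print(generate(list(range(8)), max_new_tokens=4))
```

The asymmetry is visible in the structure itself: prefill is one large, parallel, FLOP-heavy pass, while decode is a long sequence of small steps dominated by reading cached state.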
Disaggregated inference processes these phases independently, enabling targeted optimization of compute and memory resources. This architectural shift improves throughput, reduces latency, and enhances overall resource utilization (Figure 1).

However, disaggregation introduces new layers of complexity, requiring precise coordination across low-latency KV cache transfers, LLM-aware routing, and efficient memory management. NVIDIA Dynamo serves as the orchestration layer for these components, and its capabilities played a pivotal role in the latest MLPerf Inference results. Learn how disaggregation with Dynamo on GB200 NVL72 set new performance records.
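The hypothetical sketch below (not the Dynamo API; all class and function names are illustrative) shows the shape of that coordination: a compute-optimized context worker runs prefill, its KV cache is handed off to a memory-optimized generation worker, and an orchestrator, the role Dynamo plays in practice, routes the request between the two pools.

```python
# Hypothetical sketch of disaggregated serving: prefill and decode run on
# separate worker pools, with an explicit KV-cache handoff between them.

from dataclasses import dataclass, field

@dataclass
class KVCache:
    entries: list = field(default_factory=list)   # stand-in for per-layer tensors

class ContextWorker:
    """Runs the compute-bound prefill on a pool sized for raw FLOPs."""
    def prefill(self, prompt_tokens):
        cache = KVCache(entries=list(prompt_tokens))
        first_token = sum(prompt_tokens) % 50_000   # stand-in for the first output token
        return first_token, cache

class GenerationWorker:
    """Runs the bandwidth-bound decode on a pool sized for memory and interconnect."""
    def decode(self, first_token, cache, max_new_tokens):
        output, token = [first_token], first_token
        for _ in range(max_new_tokens - 1):
            cache.entries.append(token)             # cache grows one token per step
            token = sum(cache.entries) % 50_000
            output.append(token)
        return output

def serve(prompt_tokens, max_new_tokens=4):
    # An orchestrator selects the workers; the KV cache produced by prefill
    # must be transferred, at low latency, to the decode worker.
    first_token, cache = ContextWorker().prefill(prompt_tokens)
    return GenerationWorker().decode(first_token, cache, max_new_tokens)

print(serve(list(range(8))))
```

The explicit handoff is the point: the KV cache produced in the context phase must move quickly to the generation pool, which is exactly the transfer that the orchestration layer and interconnect are responsible for hiding.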
To capitalize on the benefits of disaggregated inference—particularly in the compute-intensive context phase—specialized acceleration is essential. Addressing this need, NVIDIA is introducing the Rubin CPX GPU—a purpose-built solution designed to deliver high-throughput performance for high-value long-context inference workloads while integrating seamlessly into disaggregated infrastructure.
Rubin CPX: built to accelerate long-context processing
The Rubin CPX GPU is designed to enhance long-context performance, complementing existing infrastructure while delivering scalable efficiency and maximizing ROI in context-aware inference deployments. Built on the NVIDIA Rubin architecture, Rubin CPX delivers breakthrough performance for the compute-intensive context phase of inference. It features 30 petaFLOPs of NVFP4 compute, 128 GB of GDDR7 memory, hardware support for video decoding and encoding, and 3x attention acceleration (compared to NVIDIA GB300 NVL72).
Optimized for efficiently processing long sequences, Rubin CPX is critical for high-value inference use cases like software application development and HD video generation. Designed to complement existing disaggregated inference architectures, it enhances throughput and responsiveness while maximizing ROI for large-scale generative AI workloads.
Rubin CPX works in tandem with NVIDIA Vera CPUs and Rubin GPUs for generation-phase processing, forming a complete, high-performance disaggregated serving solution for long-context use cases. The NVIDIA Vera Rubin NVL144 CPX rack integrates 144 Rubin CPX GPUs, 144 Rubin GPUs, and 36 Vera CPUs to deliver 8 exaFLOPs of NVFP4 compute—7.5× more than the GB300 NVL72—alongside 100 TB of high-speed memory and 1.7 PB/s of memory bandwidth, all within a single rack.
Using NVIDIA Quantum-X800 InfiniBand or Spectrum-X Ethernet, paired with NVIDIA ConnectX-9 SuperNICs and orchestrated by the Dynamo platform, the Vera Rubin NVL144 CPX is built to power the next wave of million-token context AI inference workloads—cutting inference costs and unlocking advanced capabilities for developers and creators worldwide.
At scale, the platform can deliver 30x to 50x return on investment, translating to as much as $5B in revenue from a $100M CAPEX investment—setting a new benchmark for inference economics. By combining disaggregated infrastructure, acceleration, and full-stack orchestration, Vera Rubin NVL144 CPX redefines what’s possible for enterprises building the next generation of generative AI applications.

Summary
The NVIDIA Rubin CPX GPU and the NVIDIA Vera Rubin NVL144 CPX rack exemplify the SMART platform philosophy—delivering scale, multidimensional performance, and ROI through architectural innovation and ecosystem integration. Powered by NVIDIA Dynamo and built for massive context, they set a new standard for full-stack AI infrastructure that creates new possibilities for workloads, including advanced software coding and generative video.
Learn more about NVIDIA Rubin CPX.