For decades, traditional data centers have been vast halls of servers with power and cooling as secondary considerations. The rise of generative AI has changed these facilities into AI factories, flipping the architectural script. Power infrastructure, once an afterthought, is becoming the primary factor that dictates the scale, location, and feasibility of new deployments.
We’re at a critical inflection point, where the industry can no longer rely on incremental improvements, and a fundamental architectural shift is required. This new blueprint must be more efficient, scalable, and capable of managing the power demands of modern AI.
The solution involves a two-pronged approach: implementing an 800-volt direct current (VDC) power distribution system alongside integrated, multi-timescale energy storage. This isn’t just about keeping the lights on—it’s about building the foundation for the future of computing.
Rising power demands of AI workloads
For years, each significant advance in processor technology meant a roughly 20% rise in power consumption. Today, that predictable curve has been shattered. The driver is the relentless pursuit of performance, enabled by high-bandwidth interconnects like NVIDIA NVLink, which allow thousands of GPUs to operate as a single, monolithic processor.
To achieve the low latency and high bandwidth required, these connections rely on copper cabling. However, copper’s effective reach is limited, creating what can be called a performance-density trap. To build a more powerful AI system, you must pack more GPUs into a smaller physical space. This architectural necessity directly links performance to power density.
The leap from the NVIDIA Hopper to the NVIDIA Blackwell architecture is a good example. While individual GPU power consumption (TDP) increased by 75%, the growth of the NVLink domain to a 72-GPU system resulted in a 3.4x increase in rack power density. The payoff was a staggering 50x increase in performance, but it also put racks on a path from tens of kilowatts to well over 100 kW, with a megawatt per rack now on the horizon. Delivering this level of power at traditional low voltages, like 54 VDC, is physically and economically impractical. The immense current required would lead to high resistive losses and necessitate an unsustainable volume of copper cabling.
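To make the scale concrete, here is a back-of-envelope sketch of what distributing roughly 1 MW to a rack looks like at 54 VDC versus 800 VDC. The rack power and busbar resistance are assumed values for illustration only:

```python
# Back-of-envelope comparison of rack current and resistive loss
# at 54 VDC versus 800 VDC. All numbers are illustrative assumptions,
# not figures for any specific rack or product.

RACK_POWER_W = 1_000_000         # assumed 1 MW rack, the level cited as "on the horizon"
BUS_RESISTANCE_OHM = 0.0001      # assumed total busbar/cable resistance (0.1 mOhm)

for voltage in (54, 800):
    current = RACK_POWER_W / voltage                # I = P / V
    i2r_loss = current ** 2 * BUS_RESISTANCE_OHM    # P_loss = I^2 * R
    print(f"{voltage:>4} VDC: {current:>8.0f} A, "
          f"resistive loss ~{i2r_loss / 1000:.1f} kW over the assumed resistance")

# Raising the distribution voltage from 54 V to 800 V cuts current by roughly 15x,
# and the I^2*R loss in the same conductor by more than two orders of magnitude.
```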
The volatility challenge of synchronous workloads
Beyond sheer density, AI workloads introduce a second, equally formidable challenge: volatility. Unlike a traditional data center running thousands of uncorrelated tasks, an AI factory operates as a single, synchronous system. When training a large language model (LLM), thousands of GPUs execute cycles of intense computation, followed by periods of data exchange, in near-perfect unison.
This creates a facility-wide power profile characterized by massive and rapid load swings. This volatility challenge has been documented in joint research by NVIDIA, Microsoft, and OpenAI on power stabilization for AI training data centers. The research shows how synchronized GPU workloads can cause grid-scale oscillations.
The power draw of a rack can swing from an “idle” state of around 30% utilization to 100% and back again in milliseconds. This forces engineers to oversize components to handle peak current rather than average current, driving up costs and footprint. When aggregated across an entire data hall, these volatile swings, representing hundreds of megawatts ramping up and down in seconds, pose a significant threat to the stability of the utility grid, making grid interconnection a primary bottleneck for AI scaling.
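The deliberately simplified model below shows the order of magnitude involved: a data hall of identically synchronized racks alternating between full utilization and an idle draw of about 30%. The rack count, per-rack power, and timing are assumptions chosen to illustrate scale, not measurements from any real deployment:

```python
# Illustrative model of facility-level power swings from a synchronized AI workload.
# Rack count, per-rack power, idle fraction, and timing are assumptions.

import numpy as np

NUM_RACKS = 1_000           # assumed data hall size
RACK_PEAK_KW = 130          # assumed per-rack peak power
IDLE_FRACTION = 0.30        # ~30% draw during idle/communication phases (per the text)
STEP_MS = 50                # assumed compute/exchange alternation period

t_ms = np.arange(0, 1_000, STEP_MS)
# Square-wave utilization: all racks swing together because the workload is synchronous.
utilization = np.where((t_ms // STEP_MS) % 2 == 0, 1.0, IDLE_FRACTION)
facility_mw = NUM_RACKS * RACK_PEAK_KW * utilization / 1_000

swing_mw = facility_mw.max() - facility_mw.min()
print(f"Facility power swings between {facility_mw.min():.0f} and {facility_mw.max():.0f} MW")
print(f"Synchronized swing magnitude: ~{swing_mw:.0f} MW within milliseconds")
```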
A new power delivery architecture
Addressing this multifaceted challenge requires an equally multifaceted solution. The proposed architectural blueprint is a two-pronged strategy that tackles both scale and volatility: transitioning to 800 VDC power distribution, coupled with the deep integration of energy storage.
Advantages of 800 VDC
The most effective way to combat the challenges of high-power distribution is to increase the voltage. Transitioning from a traditional 415 or 480 VAC 3-phase system to an 800 VDC architecture offers significant benefits, including:
Native 800 VDC end-to-end integration
Generating 800 VDC at the facility level and delivering it directly to 800 VDC compute racks eliminates redundant conversions, improving overall power efficiency. This architecture supports high-density GPU clusters, unlocks higher performance per GPU, and enables more GPUs per AI factory, driving greater compute throughput and revenue potential for partners. It also ensures future scalability beyond 1 MW per rack and seamless interoperability across the AI factory power ecosystem.
Reduced copper and cost
With 800 VDC, the same wire gauge can carry 157% more power than at 415 VAC. Because distribution needs only a three-wire setup (POS, RTN, PE) instead of the four wires required for AC, fewer conductors and smaller connectors are required. This reduces copper use, lowers material and installation costs, and eases cable management, all of which is critical as rack power inlets scale toward megawatt levels.
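As a rough illustration of why fewer, smaller conductors suffice, the sketch below compares the current each conductor must carry to deliver the same power over a 415 VAC three-phase feed and over 800 VDC. The power level, power factor, and the shortcut of scaling copper cross-section linearly with current are assumptions; real conductor sizing follows ampacity tables, derating rules, and installation standards, and this sketch does not attempt to reproduce the exact percentage cited above:

```python
# Rough sizing comparison: current per conductor for the same delivered power.
# Power level and power factor are illustrative assumptions.

import math

POWER_W = 1_000_000      # assumed 1 MW feed to a row of racks
POWER_FACTOR = 0.95      # assumed for the AC case

# 415 VAC three-phase: three current-carrying conductors (plus neutral/PE).
i_ac_per_phase = POWER_W / (math.sqrt(3) * 415 * POWER_FACTOR)

# 800 VDC: two current-carrying conductors, POS and RTN (plus PE).
i_dc_per_pole = POWER_W / 800

print(f"415 VAC 3-phase: {i_ac_per_phase:,.0f} A per phase conductor x 3")
print(f"800 VDC:         {i_dc_per_pole:,.0f} A per pole x 2")

# If cross-section scales roughly with current, total copper per unit length scales
# with (conductor count x current), a crude proxy for the savings described above.
copper_ratio = (2 * i_dc_per_pole) / (3 * i_ac_per_phase)
print(f"Approximate copper ratio (DC/AC): {copper_ratio:.2f}")
```

Even with these crude assumptions, the DC feed uses fewer current-carrying conductors, each carrying less current, which is where the copper and connector savings come from.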
Improved efficiency
A native DC architecture eliminates multiple, inefficient AC-to-DC conversion steps that occur in traditional systems, where end-to-end efficiency can be less than 90%. This streamlined power path boosts efficiency and reduces waste heat.
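To see how a conventional chain can end up below 90% end to end, the short sketch below multiplies out per-stage efficiencies for a legacy AC path and a native 800 VDC path. The stage names and efficiency values are illustrative assumptions, not measured figures for any particular product:

```python
# Illustrative end-to-end efficiency: multiply per-stage efficiencies along the power path.
# Stage names and efficiency values are assumptions for illustration only.

legacy_ac_path = {
    "MV transformer (35 kVAC -> 415 VAC)": 0.99,
    "Double-conversion AC UPS":            0.96,
    "PDU / busway distribution":           0.995,
    "Rack PSU (415 VAC -> 54 VDC)":        0.96,
    "On-tray DC-DC conversions":           0.97,
}

native_dc_path = {
    "Facility rectifier (MV AC -> 800 VDC)": 0.98,
    "In-rack DC-DC (800 VDC -> ~12 VDC)":    0.98,
}

def chain_efficiency(stages):
    """Product of all stage efficiencies along a power path."""
    eff = 1.0
    for stage_eff in stages.values():
        eff *= stage_eff
    return eff

print(f"Legacy AC chain:      {chain_efficiency(legacy_ac_path):.1%}")
print(f"Native 800 VDC chain: {chain_efficiency(native_dc_path):.1%}")
```

With these assumed values, the legacy chain lands around 88%, consistent with the less-than-90% figure above, while the shorter DC chain stays in the mid-90s.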
Simplified and more reliable architecture
A DC distribution system is inherently simpler, with fewer components like transformers and phase-balancing equipment. This reduction in complexity leads to fewer potential points of failure and increases overall system reliability.
This is not uncharted territory. The electric vehicle and utility-scale solar industries have already embraced 800 VDC or higher to improve efficiency and power density, creating a mature ecosystem of components and best practices that can be adapted for the data center.
Reducing the swings with multi-timescale energy storage
While 800 VDC solves the efficiency-at-scale problem, it doesn’t address workload volatility. For that, energy storage must be treated as an essential, active component of the power architecture, not just a backup system. The goal is to create a buffer—a low-pass filter—that decouples the chaotic power demands of the GPUs from the stability requirements of the utility grid.
Because power fluctuations occur across a wide spectrum of timescales, a multi-layered strategy is required (a simplified sketch of this buffering follows the list below), using:
- Short-duration storage (milliseconds to seconds): High-power capacitors and supercapacitors are placed close to the compute racks. They react quickly to absorb the high-frequency power spikes and fill the brief valleys created by LLM workload idle periods.
- Long-duration storage (seconds to minutes): Large, facility-level battery energy storage systems (BESS) are located at the utility interconnection. They manage the slower, larger-scale power shifts, such as the ramp-up and ramp-down of entire workloads, and provide ride-through capability during transfers to backup generators.
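Here is that buffering sketch: a first-order low-pass filter defines the smoothed profile the grid sees, and storage supplies or absorbs the difference. The facility size, swing period, and 30-second filter time constant are illustrative assumptions, not design values:

```python
# Minimal sketch of the low-pass-filter idea: the grid supplies a smoothed power
# profile, while energy storage sources or sinks the fast residual.
# Facility size, swing period, and time constant are illustrative assumptions.

import numpy as np

DT_S = 0.001                     # 1 ms samples
TAU_S = 30.0                     # assumed smoothing time constant for the grid-facing draw
t = np.arange(0, 60, DT_S)       # one minute of operation

# Assumed workload: the facility swings between 30% and 100% of 100 MW every 2 seconds.
demand_mw = 100 * np.where((t // 2).astype(int) % 2 == 0, 1.0, 0.3)

# First-order low-pass filter (exponential smoothing) as the grid-facing profile.
alpha = DT_S / (TAU_S + DT_S)
grid_mw = np.empty_like(demand_mw)
grid_mw[0] = demand_mw.mean()    # start at the average to skip the start-up transient
for i in range(1, len(demand_mw)):
    grid_mw[i] = grid_mw[i - 1] + alpha * (demand_mw[i] - grid_mw[i - 1])

# Positive values discharge storage; negative values recharge it.
storage_mw = demand_mw - grid_mw

print(f"Raw demand swing:   {demand_mw.max() - demand_mw.min():.0f} MW")
print(f"Grid-facing swing:  {grid_mw.max() - grid_mw.min():.1f} MW")
print(f"Peak storage power: {np.abs(storage_mw).max():.1f} MW")
```

In this toy model, the grid-facing swing collapses from 70 MW to a few megawatts, while the storage layer cycles tens of megawatts on the workload’s timescale, which is exactly the buffering role described above.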
The 800 VDC architecture is a key enabler of this strategy. Today, data center energy storage is connected in line with the AC power delivery path. Moving to 800 VDC makes it easier to integrate storage at the most appropriate points in the system, from the compute racks to the utility interconnection.
800 VDC power distribution in next-generation AI factories

Next-generation AI factories will transition from today’s AC distribution to an 800 VDC distribution model. Today’s architecture involves multiple power conversion stages. Utility-supplied medium voltage (e.g., 35 kVAC) is stepped down to low voltage (e.g., 415 VAC). This power is then conditioned by an AC UPS and distributed to compute racks via PDUs and busways. Within each rack, multiple PSUs convert the 415 VAC to 54 VDC, which is then distributed to individual compute trays for further DC-to-DC conversions.
The future vision centralizes all AC-to-DC conversion at the facility level, establishing a native DC data center. In this approach, medium-voltage AC is directly converted to 800 VDC by large, high-capacity power conversion systems. This 800 VDC is then distributed throughout the data hall to the compute racks. This architecture streamlines the power train by eliminating layers of AC switchgear, transformers, and PDUs. It maximizes white space for revenue-generating compute, simplifies the overall system, and provides a clean, high-voltage DC backbone for direct integration of facility-level energy storage.
The transition to a fully realized 800 VDC architecture will occur in phases, giving the industry time to adapt and the component ecosystem to mature.

The NVIDIA MGX architecture will evolve with the upcoming NVIDIA Kyber rack architecture, which is designed around this new 800 VDC distribution (see Figure 2). Power is delivered at high voltage directly to each compute node, where a late-stage, high-ratio 64:1 LLC converter efficiently steps it down to 12 VDC immediately adjacent to the GPU. This single-stage conversion is more efficient and occupies 26% less area than traditional multi-stage approaches, freeing up valuable real estate near the processor.
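As a quick sanity check on that ratio, treating the converter as an ideal fixed-ratio "DC transformer" (a simplification of a real LLC stage), dividing the 800 VDC bus by 64 gives about 12.5 V, in the neighborhood of the 12 VDC rail described above; the bus-voltage window below is an assumed range for illustration:

```python
# Quick sanity check: a fixed 64:1 step-down from the 800 VDC bus lands near 12 V.
# The ideal fixed-ratio assumption and the bus-voltage range are simplifications.

RATIO = 64

for bus_voltage in (760, 800, 840):      # assumed bus-voltage operating window
    output = bus_voltage / RATIO
    print(f"{bus_voltage} VDC bus -> {output:.2f} V at the point of load")
```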
The path forward: a call for collaboration
This transformation can’t be accomplished in a vacuum. It requires urgent, focused, industry-wide collaboration. Organizations like the Open Compute Project (OCP) provide a vital forum for developing the open standards needed to ensure interoperability, accelerate innovation, and reduce costs for the entire ecosystem. The industry must align on common voltage ranges, connector interfaces, and safety practices for 800 VDC environments.
To accelerate adoption, NVIDIA is collaborating with key industry partners across the data center electrical ecosystem, including:
- Silicon providers: AOS, Analog Devices, Efficient Power Conversion, Infineon Technologies, Innoscience, MPS, Navitas, onsemi, Power Integrations, Renesas, Richtek, ROHM, STMicroelectronics, Texas Instruments.
- Power system components: Bizlink, Delta, Flex, Lead Wealth, LITEON, Megmeet.
- Data center power systems: ABB, Eaton, GE Vernova, Heron Power, Hitachi Energy, Mitsubishi Electric, Schneider Electric, Siemens, Vertiv.
We’re publishing the technical whitepaper 800 VDC Architecture for Next-Generation AI Infrastructure and presenting details at the 2025 OCP Global Summit. Any company interested in supporting the 800 VDC Architecture can contact us for more information.