Understanding the Interconnected AI Data Centers of the Future
Chris Kawalek, NVIDIA | William Vick, NVIDIA
Today's computing challenges are outpacing the capabilities of traditional data center design. AI requires tremendous processing power that GPUs are well suited to provide. However, GPU-accelerated systems have different power, cooling, and connectivity needs than traditional IT infrastructure, creating a growing need to update data center planning principles to keep pace. We'll present strategies for AI data centers of the future: design requirements for space, power, cooling, and networking; edge-to-core data centers using a micro/meso/macro approach; and considerations for when to leverage on-premises, colocation, cloud, hybrid cloud, and AI-as-a-service. Learn how to create an AI “center of excellence” that democratizes the use of AI across your organization and avoids the pitfalls of siloed, one-off implementations that fail to maximize ROI.