GTC 2020: Managing Enterprise AI at scale: Infrastructure & Connectivity
Bryan Hill, Interxion | Patrick Lastennet, Interxion
Hyperscale cloud and content providers have driven the bulk of AI/DL adoption, ramping up their data center GPU compute capabilities in the process. Deploying compute and network nodes across the globe to ensure efficient access to data and maximum proximity to users for inference has been a key feature of their AI-enablement strategy. As leading enterprises ramp up their own AI initiatives, and as most contemplate hybrid and multi-cloud architectures, they must now navigate a complex topology of distributed networks and data centers, from public hyperscalers to private on-premises facilities and third-party colocation. The choices they make now will dictate their future ability to reap the extraordinary benefits of AI and DL. In this session, Interxion will outline the strategic value of highly connected data centers and how they are key to enabling the three-tier edge-to-core architecture necessary for future enterprise AI at scale.