
GTC Silicon Valley 2019 | ID: S91016 | AI Growing Pains: Platform Considerations for Moving from POC to Large-Scale Deployments

Saikumar Devulapalli (Dell), Claudio Fahey (Dell)
As machine learning and deep learning techniques move into mainstream adoption, the architectural considerations for platforms that support large-scale production deployments of AI applications change significantly. How do you ensure I/O bottlenecks are eliminated to keep your GPU-powered AI rocket ship fueled with data? How do you address the issues of data gravity, data scaling, and data economics to support petabyte-sized data sets? How do you simplify data management and minimize the business risk and lifecycle costs of large-scale AI platforms? We'll address these questions, discuss key business and architectural requirements for compute and storage, and explain how enterprises can achieve the maximum benefit from AI platforms that align with these requirements. We'll also introduce the Dell EMC and NVIDIA solution portfolio, which makes AI simple, flexible, and accessible.

View the slides (pdf)