Creating AI Workgroups within the Enterprise: New Best Practices for Developers and Sys Admins
Michael Balint, NVIDIA | Markus Weber, NVIDIA
GTC 2020
Multi-GPU systems have proven to be excellent resources for deep learning and machine learning teams within small and large organizations. What are the best practices for extending AI compute power to these teams without needing to build and manage a data center? We'll start with practical tips on how multiple users can share a single system (such as an NVIDIA DGX Station) and scale up to more advanced concepts of multi-node, multi-user model training and deployment. Learn how teams building powerful AI applications can work without owning servers or depending on data center access, how to apply best practices involving containers, orchestration, monitoring, and scheduling tools, and how to improve AI developer productivity; plus, see demos of how to set up your AI workgroup with ease.
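As one concrete illustration of the container-and-scheduling approach the session describes, here is a minimal sketch of a Kubernetes pod spec that requests GPUs on a shared multi-GPU system. It assumes a cluster with the NVIDIA device plugin installed; the pod name, container image tag, and `train.py` script are hypothetical placeholders, not part of the session content:

```yaml
# Hypothetical pod spec: one user's training job on a shared GPU cluster.
apiVersion: v1
kind: Pod
metadata:
  name: train-job                  # placeholder job name
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:24.01-py3   # example NGC container image
    command: ["python", "train.py"]           # placeholder training script
    resources:
      limits:
        nvidia.com/gpu: 2          # request two GPUs; the scheduler places
                                   # the pod on a node with free GPUs
```

Because each job declares its GPU needs, the scheduler can pack multiple users' workloads onto shared multi-GPU nodes without manual coordination.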