Training and Inferencing at Scale, Across Node and Cluster Borders with Optimized Software and Hardware Stack

Zvonko Kaiser, Red Hat

GTC 2020

The demand for computational power for AI/ML workloads keeps rising. While it is easy to burst work out to the cloud, costs can quickly add up for every spun-up instance. Learn how you can reduce and efficiently manage these computational costs by optimizing the hardware and software stack to fully leverage the features each node in a cluster provides. We'll discuss the latest Kubernetes features for Pod hardware affinity and NUMA awareness, as well as how to leverage operators for your AI/ML deployments.
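The NUMA-awareness mentioned above depends on the Pod's QoS class: the kubelet's Topology Manager will only align CPUs and devices on the same NUMA node for containers in the Guaranteed class, i.e. with identical integer CPU/memory requests and limits. Below is a minimal sketch, using the official Kubernetes Python client, of such a Pod requesting one GPU; the pod name, namespace, and container image are illustrative assumptions, and the cluster is assumed to run the NVIDIA device plugin with a single-numa-node Topology Manager policy.

# Sketch: a Guaranteed-QoS Pod whose requests equal its limits, so the
# kubelet's Topology Manager can NUMA-align its CPUs and GPU.
# Pod name, image, and namespace are illustrative, not from the talk.
from kubernetes import client, config

def build_training_pod() -> client.V1Pod:
    resources = client.V1ResourceRequirements(
        # Identical integer requests and limits => Guaranteed QoS class,
        # which the Topology Manager requires for NUMA alignment.
        requests={"cpu": "8", "memory": "32Gi", "nvidia.com/gpu": "1"},
        limits={"cpu": "8", "memory": "32Gi", "nvidia.com/gpu": "1"},
    )
    container = client.V1Container(
        name="trainer",
        image="nvcr.io/nvidia/pytorch:20.03-py3",  # example training image
        resources=resources,
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name="numa-aligned-trainer"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

if __name__ == "__main__":
    config.load_kube_config()  # uses your local kubeconfig
    pod = build_training_pod()
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

An operator for AI/ML deployments would typically generate Pod specs like this one on your behalf, keeping the resource requests, device plugin, and node labeling consistent across the cluster.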
