Scaling Hyper-Parameter Optimization Using RAPIDS, Dask, and Kubernetes

Eric Harper, NVIDIA | Miro Enev, NVIDIA

GTC 2020

We'll show you how to scale end-to-end data-science workflows on GPUs using RAPIDS, Dask, and Kubernetes. Specifically, we'll build a dynamically sized dataset and use it to run XGBoost hyper-parameter optimization with a particle-swarm strategy while scaling the number of GPUs in our cluster. We'll highlight best practices for scaling within a node (Dask) and across nodes (Dask plus Kubernetes), and demonstrate that the entire data-science workflow can run on GPUs.
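As a rough sketch of the within-node pattern described above (not the session notebook itself), the snippet below uses Dask-CUDA's LocalCUDACluster to start one Dask worker per local GPU and trains an XGBoost model on a synthetic dataset through XGBoost's Dask API. The dataset size, chunking, and hyper-parameter values are illustrative placeholders; in the session, a particle-swarm search would wrap the training call to explore the hyper-parameter space.

```python
import dask.array as da
import xgboost as xgb
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

# Within-node scaling: one Dask worker per local GPU.
cluster = LocalCUDACluster()
client = Client(cluster)

# Dynamically sized synthetic dataset -- grow n_rows to stress the cluster.
n_rows, n_cols = 1_000_000, 50
X = da.random.random((n_rows, n_cols), chunks=(100_000, n_cols))
y = (da.random.random(n_rows, chunks=100_000) > 0.5).astype("int32")

# Distributed GPU training via XGBoost's Dask API.
dtrain = xgb.dask.DaskDMatrix(client, X, y)
params = {
    "tree_method": "gpu_hist",   # GPU histogram algorithm (2020-era parameter name)
    "objective": "binary:logistic",
    "max_depth": 6,              # placeholder; tuned by the particle swarm in practice
    "learning_rate": 0.1,
}
result = xgb.dask.train(client, params, dtrain, num_boost_round=100)
booster = result["booster"]
```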
The session content is presented as a Jupyter notebook that enables:
1) adjusting the workflow (e.g., the size of the synthetic dataset and the number of workers),
2) interactively exploring data and model predictions, and
3) monitoring large-scale compute via the Dask dashboards (see the cluster sketch below).
Having walked through these ideas, participants will be able to further explore and extend the content in their own environments.
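For the across-node case, a minimal sketch (assuming the classic dask-kubernetes KubeCluster API and a hypothetical worker-spec.yaml pod spec that requests one GPU per worker) shows how worker pods can be scaled and how the Dask dashboard is reached for monitoring:

```python
from dask.distributed import Client
from dask_kubernetes import KubeCluster  # classic dask-kubernetes API (assumption)

# Across-node scaling: launch GPU workers as Kubernetes pods.
# "worker-spec.yaml" is a hypothetical pod spec requesting one GPU per worker.
cluster = KubeCluster.from_yaml("worker-spec.yaml")
cluster.scale(8)          # fixed size; or cluster.adapt(minimum=2, maximum=16)

client = Client(cluster)
print(client.dashboard_link)  # Dask dashboard URL for monitoring the run
```

The XGBoost training code from the earlier sketch runs unchanged against this client; only the cluster object differs between the within-node and across-node cases.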



