GTC 2020: Accelerated Data Science on GPUs using RAPIDS
John Zedlewski, NVIDIA | Dante Gama Dessavre, NVIDIA | Shankara Rao Thejaswi, NVIDIA | Corey Nolet, NVIDIA
Parallelizing ML workloads on NVIDIA GPUs lets you analyze data and make decisions more efficiently. Attend this session to learn how to accelerate your ML workloads on GPUs with cuML, the machine learning library of the RAPIDS project.
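One point the session covers is that cuML follows the scikit-learn estimator API, so moving a workload to the GPU is often a one-line import change. The sketch below uses scikit-learn so it runs on any machine; the commented import shows the cuML drop-in. The dataset and parameters are illustrative assumptions, not from the session.

```python
# cuML mirrors the scikit-learn API; on a GPU machine with RAPIDS
# installed you would swap the import for the GPU-accelerated version:
#   from cuml.cluster import KMeans
from sklearn.cluster import KMeans
import numpy as np

# Synthetic data stands in for a real workload (illustrative only).
rng = np.random.default_rng(42)
X = rng.standard_normal((1000, 8)).astype(np.float32)

# Fit and predict exactly as you would with cuML's KMeans.
km = KMeans(n_clusters=4, random_state=0, n_init=10)
labels = km.fit_predict(X)
print(labels.shape)
```

Because the two libraries share the estimator interface, the same `fit`/`predict` code path works in both, which is what makes GPU acceleration with cuML low-friction.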