Companies across nearly all industries are exploring how to use GPU-powered deep learning to extract insights from big data. From self-driving cars to disease-detecting mirrors, the use cases for deep learning are expanding by the day. Since computer scientist Geoff Hinton started using GPUs to train his neural networks, researchers have been applying the technology to tough modeling problems in the real world.
Alex Woodie of Datanami recently interviewed Will Ramey, senior product manager for Accelerated Computing at NVIDIA, to get insight into how NVIDIA is unlocking the potential of GPU-powered deep learning applications.
The article also notes that NVIDIA's new software for helping data scientists build GPU-powered deep learning systems ships this month, including version 3 of the cuDNN library and DIGITS 2.
Read more on Datanami >>
Related resources
- GTC session: Insights from NVIDIA Research
- GTC session: Global Innovators: Scaling Innovation With NVIDIA AI
- GTC session: Data Patterns for NVIDIA AI: NVIDIA DGX SuperPODs, NVIDIA DGX BasePOD, Analytics, and Deployment (Presented by IBM)
- NGC Containers: ASR Parakeet CTC Riva 1.1b
- SDK: MONAI Cloud API
- Webinar: Accelerate AI Model Inference at Scale for Financial Services