GTC Silicon Valley 2019, ID S9668: No Compromise: Using Unified Memory for Full-Resolution Medical Image AI
We'll describe our experience using Unified Memory for full-resolution medical image AI applications. Researchers are rapidly adopting deep learning-based methods for medical image analysis, but constraints in computer hardware and systems software have begun to show. One issue is GPU memory: although GPUs dramatically accelerate deep neural network training, they have far less memory than CPUs. This limits the choice of input image size, neural network architecture, and batch size, which can lead to inferior results. We'll show how NVIDIA CUDA is an ideal solution because its Unified Memory architecture allows GPUs to access system memory. We'll discuss how lifting the limits on model size, batch size, and input size affects training throughput and model performance.
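As a minimal sketch of the mechanism the talk relies on (not code from the talk itself): allocating with `cudaMallocManaged` gives a single pointer usable from both CPU and GPU, and on Pascal-and-later GPUs the allocation may exceed physical GPU memory, with pages migrating on demand. The volume dimensions below are hypothetical stand-ins for a full-resolution medical image.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel: scale every voxel of an image volume in place.
__global__ void scale(float *img, size_t n, float s) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) img[i] *= s;
}

int main() {
    // Hypothetical full-resolution 3D volume; with Unified Memory this
    // allocation may be larger than the GPU's physical memory.
    size_t n = (size_t)512 * 512 * 512;
    float *img = nullptr;
    cudaMallocManaged(&img, n * sizeof(float));  // one pointer, CPU + GPU

    for (size_t i = 0; i < n; ++i) img[i] = 1.0f;  // initialize on the host

    // Pages migrate to the GPU on first touch by the kernel.
    scale<<<(unsigned)((n + 255) / 256), 256>>>(img, n, 2.0f);
    cudaDeviceSynchronize();

    printf("%f\n", img[0]);  // pages migrate back on CPU access
    cudaFree(img);
    return 0;
}
```

In a deep learning framework this oversubscription is what lets model size, batch size, and input size grow past GPU memory, at the cost of page-migration traffic, which is why the talk examines the effect on training throughput.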