GTC Silicon Valley-2019: Higher Performance with Less Data Via Capsule Networks and Active Learning
Session ID: S9290
Chris Aasted (Lockheed Martin)
Learn how the combination of capsule networks, active learning, and transfer learning can reduce the number of training samples required to add a new label to an existing classifier. We will detail how we developed our network architecture and training-data selection algorithm, and discuss their implementation in Python using TensorFlow's Keras layers with GPU acceleration. We'll also discuss our results from applying this approach to image classification tasks and how they compare to a standard convolutional neural network approach.
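To give a rough sense of the active-learning component mentioned in the abstract, the sketch below runs an uncertainty-sampling loop around a small Keras classifier: train on a labeled seed set, score the unlabeled pool by predictive entropy, query the most uncertain samples, and retrain. This is an illustrative sketch only, not the session's implementation: the build_model function, the seed and query sizes, and the use of MNIST are placeholder assumptions, and the talk's actual classifier is a capsule network combined with transfer learning rather than the small dense model used here.

# Minimal uncertainty-sampling active-learning loop (illustrative sketch).
# Assumed placeholders: build_model, MNIST data, seed size 200, query size 100.
import numpy as np
import tensorflow as tf

def build_model(num_classes=10):
    # Stand-in classifier; the session's actual model is a capsule network.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0

# Start with a small labeled seed set; the rest acts as the unlabeled pool.
rng = np.random.default_rng(0)
idx = rng.permutation(len(x_train))
labeled, pool = list(idx[:200]), list(idx[200:])

for round_num in range(5):
    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train[labeled], y_train[labeled], epochs=3, verbose=0)

    # Score the unlabeled pool by predictive entropy (higher = more uncertain).
    probs = model.predict(x_train[pool], verbose=0)
    entropy = -np.sum(probs * np.log(probs + 1e-9), axis=1)

    # Query the 100 most uncertain samples; labels are available here only
    # because this simulates active learning on a fully labeled dataset.
    query = set(np.argsort(entropy)[-100:])
    labeled.extend(pool[i] for i in query)
    pool = [p for j, p in enumerate(pool) if j not in query]

In a real deployment the queried samples would be sent to a human annotator instead of looked up in y_train, and the selection criterion could be swapped for other acquisition functions without changing the structure of the loop.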