GTC Silicon Valley 2019, ID S9249: Practical Machine Learning Interpretability Techniques
This presentation illustrates how to combine innovations from several sub-disciplines of machine learning research to train understandable, fair, trustworthy, and accurate predictive modeling systems. Techniques from research into directly interpretable Bayesian or constrained machine learning models and post-hoc explanations can be used to train transparent, accurate models and to make nearly every aspect of their behavior understandable and accountable to human users. Techniques from fairness research can be used to check model predictions for sociological bias, to preprocess data, and to post-process predictions so that the resulting models are fair. Finally, newer testing and debugging techniques, often inspired by best practices in software engineering, can increase the trustworthiness of model predictions on unseen data. Together, these techniques create a new and truly human-friendly kind of machine learning suitable for use in business- and life-critical decision support.
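As a concrete illustration of the kind of bias check described above, the sketch below computes an adverse impact ratio between the favorable-outcome rates of a protected group and a reference group (a common disparate-impact screen, where values below roughly 0.8 are often flagged under the "four-fifths rule"). The function name and toy data are hypothetical, not taken from the talk; this is a minimal sketch of one fairness metric, not the presenter's implementation.

```python
def adverse_impact_ratio(preds, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    preds  : iterable of binary model predictions (1 = favorable outcome)
    groups : iterable of group labels, aligned with preds
    A ratio well below 0.8 is a common flag for potential disparate impact.
    """
    def favorable_rate(group):
        outcomes = [p for p, g in zip(preds, groups) if g == group]
        return sum(outcomes) / len(outcomes)

    return favorable_rate(protected) / favorable_rate(reference)


# Hypothetical binary predictions for two demographic groups "a" and "b".
preds = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

air = adverse_impact_ratio(preds, groups, protected="a", reference="b")
print(f"adverse impact ratio: {air:.2f}")  # 0.40 / 0.80 = 0.50, below the 0.8 flag
```

In practice such a check would run on held-out predictions for each protected attribute; when the ratio falls below the chosen threshold, the abstract's remedies apply: reweigh or preprocess the training data, or post-process the decision threshold per group.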