Why did you want to become an NVIDIA DLI University Ambassador?

I have always been fascinated by machine learning; even the name "neural networks" sounded cool. I read a paper by Honglak Lee (University of Michigan) demonstrating that deep neural networks learn filters eerily similar to the behavior of neurons in the mammalian visual cortex. His networks were neither encouraged nor expected to learn this behavior; it emerged automatically as the network continued to learn in an unsupervised fashion! Wow, I thought this was amazing, and from that point forward I was hooked. Deep learning has taken machine learning to a new, automagical level, and NVIDIA has been central to this revolution. I love teaching, and so DLI seemed like a perfect fit.

Tell us about the attendees of your DLI workshops.

Attendees come from a broad spectrum, from managers to scientists to hobbyists. The one common theme seems to be a fascination with technology and a desire to learn. As Andrew Ng says, "AI is the new electricity." Attendees want to know more; they want to get on this bandwagon before getting left behind.

How do DLI workshops help you become a better researcher and professor?

I love learning. As an associate professor, I learn from students just as they learn from me. DLI workshops give me the chance to meet others, to show them what I have learned, and to listen and learn from them. They are a chance to hear and understand problems from a different perspective and to brainstorm new solutions.

What are your AI plans in the future?

We are entering a new era of digital learning, digital networking, and digital presence, and new technologies and new players are continually emerging. Two hot topics I find compelling:

1) CNNs have forever changed our lives, but they only work on gridded structures. The vast majority of the world's scientific problems involve heterogeneous, unstructured data. Methods such as graph CNNs bring the advances CNNs delivered on gridded structures to these unstructured problems (see the sketch below).

2) Deep learning and backpropagation work great, but they require vast amounts of data and are susceptible to failure on unforeseen samples. Creative use of data synthesis and self-supervised learning curricula can drastically reduce the amount of training data needed and make learned models more predictable and less susceptible to adversarial examples.

The evolution of science and innovation only accelerates over time, and I am eager to embrace it.
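To make the first topic concrete, here is a minimal sketch of a single graph-convolution layer in plain NumPy, following the widely used symmetric-normalization formulation of Kipf and Welling. The toy graph, feature sizes, and function name are illustrative choices, not anything from the interview; a real model would use a library such as PyTorch Geometric or DGL with learned weights.

```python
import numpy as np

def graph_conv(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # degree normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # aggregate neighbors, then ReLU

# Hypothetical 4-node path graph, 3 input features per node, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # node feature matrix
W = rng.normal(size=(3, 2))   # weight matrix (random stand-in for learned weights)
print(graph_conv(A, H, W))    # new node embeddings, shape (4, 2)
```

Stacking several such layers lets information propagate across multi-hop neighborhoods of an arbitrary graph, much as stacked convolutions grow receptive fields on a pixel grid.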