
Meet the Researcher: Alan Fern, Machine Learning and Automated Planning for Sequential Decision Making

‘Meet the Researcher’ is a new series in which we spotlight different researchers in academia who are using GPUs to accelerate their work. This week, we spotlight Alan Fern, a Professor of Computer Science and Associate Head of Research in the School of Electrical Engineering and Computer Science at Oregon State University. 

Fern received a National Science Foundation CAREER award in 2006, is an associate editor of the Machine Learning Journal and the Journal of Artificial Intelligence Research, and regularly serves as an area chair for the NeurIPS, ICML, and AAAI conferences. His research interests span a range of topics in artificial intelligence, including machine learning and automated planning/control.

The excerpt below summarizes a conversation between Fern and the NVIDIA team.

What are your research areas of focus?

My academic areas of focus include machine learning and automated planning for sequential decision making. Some aspects of this work fall under the category of reinforcement learning. At a high level, one of my primary interests is integrating symbolic AI and more recent developments in deep representation learning.

Tell us about your current research projects. 

Some of my main projects these days are explainable AI in the context of reinforcement learning and vision, robust AI, and reinforcement learning and planning for agile biped robot locomotion.

One of my favorite recent papers in the XAI space is Learning Finite State Representations of Recurrent Policy Networks. A theoretical paper that I think has important practical implications, recently presented at AAAI, is The Choice Function Framework for Online Policy Improvement.
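
To give a rough sense of what a finite state representation of a recurrent policy means, the sketch below quantizes the hidden state of a small GRU policy so that it can only take a finite set of discrete values; the distinct codes observed over many rollouts can then be read off as the states of a finite state machine. This is a hypothetical illustration of the general idea only, not the method from the paper, and all class and variable names are made up.

```python
# Hypothetical sketch: restrict an RNN policy's hidden state to a finite set of
# discrete values so that finite-state structure can be read off from rollouts.
# Illustration of the general idea only, not the cited paper's exact method.
import torch
import torch.nn as nn


class QuantizedRecurrentPolicy(nn.Module):
    def __init__(self, obs_dim, hidden_dim, bottleneck_dim, num_actions):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden_dim)
        self.encode = nn.Linear(hidden_dim, bottleneck_dim)  # hidden -> bottleneck
        self.decode = nn.Linear(bottleneck_dim, hidden_dim)  # bottleneck -> hidden
        self.policy_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs, hidden):
        hidden = self.rnn(obs, hidden)
        # Quantize the bottleneck to {-1, 0, +1} with a straight-through estimator,
        # so the reachable hidden states form a finite set of discrete codes.
        z = torch.tanh(self.encode(hidden))
        z_quantized = torch.round(z)
        z = z + (z_quantized - z).detach()
        hidden = self.decode(z)
        return self.policy_head(hidden), hidden


# Usage: step the policy and record the discrete bottleneck codes; the distinct
# codes seen across rollouts become the states of an extracted finite-state machine.
policy = QuantizedRecurrentPolicy(obs_dim=4, hidden_dim=32, bottleneck_dim=8, num_actions=2)
obs = torch.zeros(1, 4)
hidden = torch.zeros(1, 32)
action_logits, hidden = policy(obs, hidden)
```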

What problems or challenges does your research address? 

Right now, we can’t always understand the reasoning behind the decisions or recommendations that AI systems make, especially systems that include machine learning components.  Our explainable AI project aims to provide human-understandable explanations of these decisions, with a primary goal of supporting acceptance testing of AI-based software systems. In another thread, current AI systems are very bad at reliably detecting when a situation is novel compared to the training experience, which can lead to overconfidence and dangerous behavior in critical applications. We are working on anomaly detection approaches to identify novel situations and objects, with a focus on maintaining reasonable false positive rates.
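
As a rough illustration of what anomaly detection with a controlled false positive rate can look like, the sketch below uses a common baseline: flag an input as novel when a classifier's maximum softmax confidence falls below a threshold calibrated on held-out in-distribution data. All names are hypothetical, and this is not a description of the group's actual approach.

```python
# Hypothetical sketch of a simple novelty-detection baseline: flag an input as
# novel when the classifier's maximum softmax probability falls below a
# threshold chosen so that roughly target_fpr of in-distribution inputs are flagged.
import numpy as np
import torch
import torch.nn.functional as F


def max_softmax_scores(model, inputs):
    """Confidence score per input: higher means 'looks like the training data'."""
    with torch.no_grad():
        probs = F.softmax(model(inputs), dim=-1)
    return probs.max(dim=-1).values.cpu().numpy()


def calibrate_threshold(in_dist_scores, target_fpr=0.05):
    """Pick a threshold so only ~target_fpr of in-distribution inputs are flagged."""
    return np.quantile(in_dist_scores, target_fpr)


def is_novel(scores, threshold):
    """Inputs scoring below the calibrated threshold are flagged as novel."""
    return scores < threshold


# Usage with a placeholder classifier (stand-in for a trained model):
model = torch.nn.Linear(16, 10)
val_inputs = torch.randn(512, 16)   # held-out in-distribution data
threshold = calibrate_threshold(max_softmax_scores(model, val_inputs), target_fpr=0.05)
test_inputs = torch.randn(8, 16)
flags = is_novel(max_softmax_scores(model, test_inputs), threshold)
```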

How have you used NVIDIA technology either in your current or previous research?

We regularly use GPUs for training deep networks in all of our work, including reinforcement learning for training biped locomotion in simulation and transferring to the real world (sim-to-real) and training reinforcement learning agents in complex real-time strategy game environments for explainability research. 

Tell us about your recent breakthroughs in this work.

We have continually improved our sim-to-real transfer, which is leading to exciting real-world demonstrations of biped locomotion. Our explainability work has produced some very surprising results. For example, it revealed that certain deep reinforcement learning agents learn to play Atari video games without actually looking at the screen, and when they do look at the screen, they are looking for very different reasons than we would expect.

What is the (expected) impact of your work on the field/community/world?

Work on explainable AI is critical to enabling the use of AI techniques in applications where the stakes are high. It is critical that we make our best effort to thoroughly test AI-enabled software in such applications, and traditional testing tools are not sufficient for this. Similarly, work on anomaly detection is critical for reliability in the real world. AI-enabled systems must be able to “raise a flag” when they encounter novel situations where their learned behavior may not be trustworthy.

What’s next for your research?

I’m very interested in marrying the long line of work on symbolic AI techniques with modern deep representation-learning approaches. These areas have complementary strengths and weaknesses, and combining them is an interesting bet for the next generation of AI advances.

Any advice for new researchers?

Focus on learning the theoretical and scientific foundations of your research area. Focus on explicitly asking and investigating scientific questions in your research that are grounded in those foundations. Remember that we can learn as much, if not more, from careful investigation and reporting of “failures” as from successes. It is so easy today to get started with code and instead get immersed in “hacking away” toward the top spot on a leaderboard.
