Learning to Optimize for Structured Output Spaces

Date/Location Information
Seminar Series: 
CCDC Seminar
Quarter: 
Spring 2017
Talk Date: 
04/14/2017 - 3:00pm - 4:00pm
Room: 
Webb 1100
Speaker Information
Speaker name: 
Yisong Yue
Speaker Title: 
Assistant Professor
Speaker Organization: 
Caltech
Speaker Department: 
Computing and Mathematical Sciences
Speaker Short Biography: 
Yisong Yue is an assistant professor in the Computing and Mathematical Sciences Department at the California Institute of Technology. He was previously a research scientist at Disney Research. Before that, he was a postdoctoral researcher in the Machine Learning Department and the iLab at Carnegie Mellon University. He received a Ph.D. from Cornell University and a B.S. from the University of Illinois at Urbana-Champaign. Yisong's research interests lie primarily in the theory and application of statistical machine learning. He is particularly interested in developing novel methods for spatiotemporal reasoning, structured prediction, interactive learning systems, and learning with humans in the loop. In the past, his research has been applied to information retrieval, recommender systems, text classification, learning from rich user interfaces, analyzing implicit human feedback, data-driven animation, behavior analysis, sports analytics, policy learning in robotics, and adaptive routing & allocation problems.
Talk Abstract: 

In many settings, predictions must be made over structured output spaces. Examples include both discrete structures, such as sequences and clusterings, and continuous ones, such as trajectories. The conventional machine learning approach to such "structured prediction" problems is to learn over a holistically pre-specified structured model class (e.g., via conditional random fields or structural SVMs). In this talk, I will discuss recent work along an alternative direction of using learning reductions, or "learning to optimize".
In learning to optimize, the goal is to reduce the structured prediction problem into a sequence of standard prediction problems that can be solved via conventional supervised learning. Such an approach is attractive because it can easily leverage powerful function classes such as random forests and deep neural nets. The main challenge lies in identifying a good learning reduction that is both principled and practical. I will discuss two projects in detail: contextual submodular optimization, and smooth online sequence prediction.
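To make the reduction idea concrete, here is a minimal sketch of one of its instances: greedy list construction for a contextual submodular objective (set cover), reduced to ordinary regression. A regressor is trained to predict the marginal gain of adding a candidate item given the context and the partial list, and at test time the list is built greedily by the learned scores. All function names, the toy feature map, and the linear regressor are illustrative assumptions, not details from the talk.

```python
import numpy as np

def featurize(context, chosen, item):
    # Toy features (assumed for this sketch): a bias, the context scalar,
    # the candidate's size, and its overlap with items already chosen.
    overlap = sum(len(item & c) for c in chosen)
    return np.array([1.0, context, float(len(item)), float(overlap)])

def marginal_gain(chosen, item):
    # Ground-truth submodular objective: coverage of new elements.
    covered = set().union(*chosen) if chosen else set()
    return len(item - covered)

def collect_training_data(contexts, candidate_sets, budget):
    # The reduction step: roll out greedy selection under the TRUE objective
    # and record (features, marginal gain) pairs as regression examples.
    X, y = [], []
    for ctx, cands in zip(contexts, candidate_sets):
        chosen, remaining = [], list(cands)
        for _ in range(budget):
            gains = [marginal_gain(chosen, it) for it in remaining]
            for it, g in zip(remaining, gains):
                X.append(featurize(ctx, chosen, it))
                y.append(float(g))
            chosen.append(remaining.pop(int(np.argmax(gains))))
    return np.array(X), np.array(y)

def greedy_predict(w, context, candidates, budget):
    # At test time: greedily add the item the learned regressor scores highest.
    chosen, remaining = [], list(candidates)
    for _ in range(budget):
        scores = [featurize(context, chosen, it) @ w for it in remaining]
        chosen.append(remaining.pop(int(np.argmax(scores))))
    return chosen

# Tiny demo with set-cover style items and a linear least-squares regressor.
candidates = [frozenset({1, 2}), frozenset({2, 3}), frozenset({4}), frozenset({1, 4})]
X, y = collect_training_data([0.5, 1.0], [candidates, candidates], budget=2)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
picked = greedy_predict(w, 0.5, candidates, budget=2)
print(len(picked))  # 2
```

In practice the linear regressor would be replaced by a richer function class such as a random forest or a deep network, which is exactly the flexibility the reduction is meant to buy.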
This is joint work with Stephane Ross, Robin Zhou, Hoang Le, Jimmy Chen, Debadeepta Dey, Andrew Kang, Drew Bagnell, Jim Little, and Peter Carr.