Monday, November 13 Schedule

0800 - 0845
0845 - 0900
Welcome and Logistics
0900 - 1000
Invited Talk: Rodney Brooks, MIT
1000 - 1020
Learning Human Utility from Video Demonstrations for Deductive Planning in Robotics
Nishant Shukla, UCLA; Yunzhong He, UCLA; Frank Chen, UCLA; Song-Chun Zhu, UCLA
1020 - 1040
Learning a visuomotor controller for real world robotic grasping using simulated depth images
Ulrich Viereck, Northeastern University; Andreas ten Pas, Northeastern University; Kate Saenko, Boston University; Robert Platt, Northeastern University
1040 - 1100
One-Shot Visual Imitation Learning via Meta-Learning
Chelsea Finn, UC Berkeley; Tianhe Yu, UC Berkeley; Tianhao Zhang, UC Berkeley; Pieter Abbeel, UC Berkeley; Sergey Levine, UC Berkeley
1100 - 1130
Coffee Break
1130 - 1136
Learning Partially Contracting Dynamical Systems from Demonstrations
Harish Ravichandar, University of Connecticut; Iman Salehi, University of Connecticut; Ashwin Dani, University of Connecticut
1136 - 1142
Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task
Stephen James, Imperial College London; Andrew J. Davison, Imperial College London; Edward Johns, Imperial College London
1142 - 1148
1148 - 1154
Fast Residual Forests: Rapid Ensemble Learning for Semantic Segmentation
Yan Zuo, Monash University; Tom Drummond, Monash University
1154 - 1200
Adaptable Pouring: Teaching Robots Not to Spill using Fast but Approximate Fluid Simulation
Tatiana Lopez-Guevara, Heriot-Watt University and University of Edinburgh; Nicholas K. Taylor, Heriot-Watt University; Michael U. Gutmann, University of Edinburgh; Subramanian Ramamoorthy, University of Edinburgh; Kartic Subr, University of Edinburgh
1200 - 1206
Improved Adversarial Systems for 3D Object Generation and Reconstruction
Edward J. Smith, McGill University; David Meger, McGill University
1206 - 1212 
Optimizing Long-term Predictions for Model-based Policy Search
Andreas Doerr, BCAI, MPI-IS AMD; Christian Daniel, BCAI; Duy Nguyen-Tuong, BCAI; Alonso Marco, MPI-IS AMD; Stefan Schaal, MPI-IS AMD, USC; Marc Toussaint, MLR; Sebastian Trimpe, MPI-IS AMD
1212 - 1218
Learning Robotic Manipulation of Granular Media
Connor Schenck, University of Washington & Google Inc.; Jonathan Tompson, Google Inc.; Sergey Levine, University of California Berkeley & Google Inc.; Dieter Fox, University of Washington
1218 - 1224
Predictive-State Decoders: Augmenting Recurrent Networks for Better Filtering, Imitation, and Reinforcement Learning
Arun Venkatraman, Carnegie Mellon University; Nick Rhinehart, Carnegie Mellon University; Wen Sun, Carnegie Mellon University; Byron Boots, Georgia Institute of Technology; Kris Kitani, Carnegie Mellon University; Drew Bagnell, Carnegie Mellon University
1224 - 1330
Lunch Break
1330 - 1430
Invited Talk: Stefanie Tellex, Brown University
1430 - 1450 
Opportunistic Active Learning for Grounding Natural Language Descriptions
Jesse Thomason, University of Texas at Austin; Aishwarya Padmakumar, University of Texas at Austin; Jivko Sinapov, Tufts University; Justin Hart, University of Texas at Austin; Peter Stone, University of Texas at Austin; Raymond J. Mooney, University of Texas at Austin
1450 - 1510 
Towards Robust Skill Generalization: Unifying Learning from Demonstration and Motion Planning
Muhammad Asif Rana, Georgia Tech; Mustafa Mukadam, Georgia Tech; S. Reza Ahmadzadeh, Georgia Tech; Sonia Chernova, Georgia Tech; Byron Boots, Georgia Tech
1510 - 1530 
Learning Robot Objectives from Physical Human Interaction
Andrea Bajcsy, UC Berkeley; Dylan P. Losey, Rice University; Marcia K. O'Malley, Rice University; Anca D. Dragan, UC Berkeley
1530 - 1600
Coffee Break
1600 - 1606
Active Incremental Learning of Robot Movement Primitives
Guilherme Maeda, TUDa/ATR; Marco Ewerton, TUDa; Takayuki Osa, U. Tokyo; Baptiste Busch, Inria; Jan Peters, TUDa/MPI
1606 - 1612
DART: Noise Injection for Robust Imitation Learning
Michael Laskey, UC Berkeley; Jonathan Lee, UC Berkeley; Roy Fox, UC Berkeley; Anca Dragan, UC Berkeley; Ken Goldberg, UC Berkeley
1612 - 1618
Bayesian Interaction Primitives: A SLAM Approach to Human-Robot Interaction
Joseph Campbell, Arizona State University; Heni Ben Amor, Arizona State University
1618 - 1624
Hierarchical Reinforcement Learning with Parameters
Maciej Klimek; Henryk Michalewski, Institute of Mathematics of the Polish Academy of Sciences; Piotr Miłoś, University of Warsaw
1624 - 1630
DDCO: Discovery of Deep Continuous Options for Robot Learning from Demonstrations
Sanjay Krishnan*, UC Berkeley; Roy Fox*, UC Berkeley; Ion Stoica, UC Berkeley; Ken Goldberg, UC Berkeley
1630 - 1636
Extending Model-based Policy Gradients for Robots in Heteroscedastic Environments
John Martin, Stevens Institute of Technology; Brendan Englot, Stevens Institute of Technology
1636 - 1642
Neural Task Programming: Learning to Generalize Across Hierarchical Tasks
Danfei Xu, Stanford University; Suraj Nair, California Institute of Technology; Yuke Zhu, Stanford University; Julian Gao, Stanford University; Animesh Garg, Stanford University; Li Fei-Fei, Stanford University; Silvio Savarese, Stanford University
1642 - 1648
Most Likely Expected Improvement for Automatic Prior Selection in Data-Efficient Direct Policy Search
Remi Pautrat, Inria Nancy Grand-Est; Konstantinos Chatzilygeroudis, Inria Nancy Grand-Est; Jean-Baptiste Mouret, Inria Nancy Grand-Est
1648 - 1654
Burn-In Demonstrations for Multi-Modal Imitation Learning
Alex Kuefler, Stanford University; Mykel Kochenderfer, Stanford University
1654 - 1700
ALAN: Adaptive Learning for Multi-Agent Navigation
Julio Godoy, Universidad de Concepción; Tiannan Chen, University of Minnesota; Stephen J. Guy, University of Minnesota; Ioannis Karamouzas, Clemson University; Maria Gini, University of Minnesota
1700 - 1800
Poster Session
1800 - 2000