Accepted Papers
Archival Track

CARLA: An Open Urban Driving Simulator
Alexey Dosovitskiy, Intel Labs; German Ros, Computer Vision Center; Felipe Codevilla, Computer Vision Center; Antonio Lopez, Computer Vision Center (CVC); Vladlen Koltun*, Intel Labs

CORe50: a New Dataset and Benchmark for Continuous Object Recognition
Vincenzo Lomonaco, University of Bologna; Davide Maltoni, University of Bologna

Fast Residual Forests: Rapid Ensemble Learning for Semantic Segmentation
Yan Zuo, Monash University; Tom Drummond, Monash University

Active Incremental Learning of Robot Movement Primitives
Guilherme Maeda, TU Darmstadt; Marco Ewerton, TU Darmstadt; Takayuki Osa, University of Tokyo; Baptiste Busch, Inria-Bordeaux; Jan Peters, TU Darmstadt

Deep Kernels for Optimizing Locomotion Controllers
Rika Antonova, KTH; Akshara Rai, Carnegie Mellon University; Christopher Atkeson, Carnegie Mellon University

Efficient Automatic Perception System Parameter Tuning On Site without Expert Supervision
Humphrey Hu, Carnegie Mellon University; George Kantor, Carnegie Mellon University

Opportunistic Active Learning for Grounding Natural Language Descriptions
Jesse Thomason, University of Texas at Austin; Aishwarya Padmakumar, University of Texas at Austin; Jivko Sinapov, University of Texas at Austin; Justin Hart, University of Texas at Austin; Peter Stone, University of Texas at Austin; Raymond Mooney, University of Texas at Austin

Adaptable Pouring: Teaching Robots Not to Spill using Fast but Approximate Fluid Simulation
Tatiana López Guevara, University of Edinburgh & Heriot-Watt University; Nicholas K. Taylor, Heriot-Watt University; Michael U. Gutmann, University of Edinburgh; Subramanian Ramamoorthy, University of Edinburgh; Kartic Subr, University of Edinburgh

Improved Adversarial Systems for 3D Object Generation and Reconstruction
Edward Smith, McGill; David Meger, University of British Columbia

Principal Variety Analysis
Reza Iraji, Colorado State University; Hamidreza Chitsaz, Colorado State University

Towards Robust Skill Generalization: Unifying Learning from Demonstration and Motion Planning
Muhammad Asif Rana, Georgia Institute of Technology; Mustafa Mukadam, Georgia Tech; Seyed Reza Ahmadzadeh, Georgia Institute of Technology; Sonia Chernova, Georgia Institute of Technology; Byron Boots, Georgia Institute of Technology

End-to-End Learning of Semantic Grasping
Eric Jang*, Google; Sudheendra Vijayanarasimhan, Google; Peter Pastor, [X]; Julian Ibarz, Google; Sergey Levine, UC Berkeley

Aggressive Deep Driving: Combining Convolutional Neural Networks and Model Predictive Control
Paul Drews, Georgia Institute of Technology; Grady Williams, Georgia Institute of Technology; Brian Goldfain, Georgia Institute of Technology; Evangelos Theodorou, Georgia Institute of Technology; James Rehg, Georgia Institute of Technology

Dart: Optimizing Noise Injection in Imitation Learning
Michael Laskey, UC Berkeley; Anca Dragan, UC Berkeley; Jonathan Lee, UC Berkeley; Ken Goldberg, UC Berkeley; Roy Fox, UC Berkeley

Semi-Supervised Haptic Material Recognition for Robots using Generative Adversarial Networks
Zackory Erickson, Georgia Institute of Technology; Sonia Chernova, Georgia Institute of Technology; Charles Kemp, Georgia Institute of Technology

Online Learning with Stochastic Recurrent Neural Networks using Intrinsic Motivation Signals
Daniel Tanneberg, TU Darmstadt; Jan Peters, TU Darmstadt; Elmar Rueckert, TU Darmstadt

Learning Stable Task Sequences from Demonstration with Linear Parameter Varying Systems and Hidden Markov Models
Jose Medina, EPFL; Aude Billard, EPFL

Intention-Net: Integrated Planning and Deep Learning for Autonomous Navigation
Wei Gao, NUS; Karthikk Subramanian, Panasonic R&D

Uncertainty-driven Imagination for Continuous Deep Reinforcement Learning
Gabriel Kalweit, University of Freiburg; Joschka Boedecker, University of Freiburg

The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously
Serkan Cabi, DeepMind; Sergio Gomez Colmenarejo, DeepMind; Matt Hoffman, DeepMind; Misha Denil, DeepMind; Ziyu Wang, DeepMind; Nando de Freitas, DeepMind

Learning Robot Objectives from Physical Human Interaction
Andrea Bajcsy, UC Berkeley; Dylan Losey, Rice University; Marcia O'Malley, Rice University; Anca Dragan, UC Berkeley

Optimizing Long-term Predictions for Model-based Policy Search
Andreas Doerr, MPI-IS, BCAI; Christian Daniel; Duy Nguyen-Tuong; Alonso Marco; Stefan Schaal; Marc Toussaint; Sebastian Trimpe, MPI for Intelligent Systems

Learning Robotic Manipulation of Granular Media
Connor Schenck, University of Washington; Sergey Levine, UC Berkeley; Jonathan Tompson, Google; Dieter Fox, University of Washington

Learning End-to-end Multimodal Sensor Policies for Autonomous Navigation
Guan-Horng Liu, Carnegie Mellon University; Avinash Siravuru, Carnegie Mellon University; Sai Prabhakar, Carnegie Mellon University; Manuela Veloso, Carnegie Mellon University; George Kantor, Carnegie Mellon University

Sim-to-Real Robot Learning from Pixels with Progressive Nets
Andrei Rusu, DeepMind; Matej Vecerik, DeepMind; Thomas Rothorl, DeepMind; Nicolas Heess, DeepMind; Razvan Pascanu, DeepMind; Raia Hadsell, Google DeepMind

Learning Heuristic Search via Imitation
Mohak Bhardwaj, Carnegie Mellon University; Sanjiban Choudhury, Carnegie Mellon University; Sebastian Scherer, Carnegie Mellon University

Mutual Alignment Transfer Learning
Markus Wulfmeier, Oxford; Ingmar Posner, Oxford; Pieter Abbeel, UC Berkeley

Learning a visuomotor controller for real world robotic grasping using simulated depth images
Ulrich Viereck, Northeastern University; Andreas ten Pas, Northeastern University; Kate Saenko, Boston University; Robert Platt, Northeastern University

Hierarchical Reinforcement Learning with Parameters
Piotr Milos, University of Warsaw; Henryk Michalewski, University of Warsaw; Maciej Klimek, deepsense.io

Learning Deep Grasping Models From Vision and Touch
Roberto Calandra, UC Berkeley; Andrew Owens, UC Berkeley; Manu Upadhyaya, UC Berkeley; Wenzhen Yuan, MIT; Justin Lin, UC Berkeley; Edward Adelson, MIT; Sergey Levine, UC Berkeley

image2mass: Estimating the Mass of an Object from Its Image
Trevor Standley, Stanford University; Ozan Sener, Stanford University; Silvio Savarese, Stanford University

Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task
Stephen James, Imperial College London; Andrew Davison, Imperial College London; Edward Johns, Imperial College London

Occlusion-Aware Visual Foresight for Self-Supervised Robot Learning
Frederik Ebert, UC Berkeley; Chelsea Finn, UC Berkeley; Alex Lee, UC Berkeley; Sergey Levine, UC Berkeley

One-Shot Visual Imitation Learning via Meta-Learning
Chelsea Finn, UC Berkeley; Tianhe Yu, UC Berkeley; Tianhao Zhang, UC Berkeley; Pieter Abbeel, UC Berkeley; Sergey Levine, UC Berkeley

Learning Partially Contracting Dynamical Systems from Demonstrations
Harish Chaandar Ravichandar, University of Connecticut; Iman Salehi, University of Connecticut; Ashwin Dani, University of Connecticut

Bayesian Interaction Primitives: A SLAM Approach to Human-Robot Interaction
Joseph Campbell, Arizona State University; Heni Ben Amor, Arizona State University

Learning Data-Efficient Rigid-Body Contact Models: Case Study of Planar Impact
Nima Fazeli, MIT; Samuel Zapolsky; Evan Drumwright; Alberto Rodriguez, MIT

Emergent behaviors in mixed-autonomy traffic
Cathy Wu, UC Berkeley; Abdul Kreidieh, UC Berkeley; Eugene Vinitsky, UC Berkeley; Alexandre Bayen, UC Berkeley

How Robots Learn to Classify New Objects Trained from Small Data Sets
Tick Son Wang, DLR; Zoltan-Csaba Marton, German Aerospace Center (DLR); Manuel Brucker, DLR; Rudolph Triebel

DDCO: Discovery of Deep Continuous Options for Robot Learning from Demonstrations
Sanjay Krishnan, UC Berkeley; Roy Fox, UC Berkeley; Ion Stoica, UC Berkeley; Ken Goldberg, UC Berkeley

Extending Model-based Policy Gradients for Robots in Heteroscedastic Environments
John Martin, Stevens Institute of Technology; Brendan Englot, Stevens Institute of Technology

Learning Human Utility from Video Demonstrations for Deductive Planning in Robotics
Nishant Shukla, UCLA; Song-Chun Zhu, UCLA; Frank Chen, UCLA; Yunzhong He, UCLA

Bayesian Hilbert Maps for Dynamic Continuous Occupancy Mapping
Ransalu Senanayake, The University of Sydney; Fabio Ramos, The University of Sydney

MBMF: Model-Based Priors for Model-Free Reinforcement Learning
Somil Bansal, UC Berkeley; Roberto Calandra, UC Berkeley; Sergey Levine, UC Berkeley; Claire Tomlin, UC Berkeley

Learning Dynamics Across Similar Spatiotemporally Evolving Systems
Joshua Whitman, UIUC; Girish Chowdhary, UIUC

Reverse Curriculum Generation for Robotic Manipulation with Reinforcement Learning
Carlos Florensa, UC Berkeley; David Held, UC Berkeley; Pieter Abbeel, UC Berkeley

Fastron: An Online Learning-Based Model and Active Learning Strategy for Proxy Collision Detection
Nikhil Das, UC San Diego; Naman Gupta, UC San Diego; Michael Yip, UC San Diego

Gradient-free policy architecture search and adaptation
Sayna Ebrahimi, UC Berkeley; Anna Rohrbach, Max Planck Institute for Informatics; Trevor Darrell, UC Berkeley

Learning Deep Policies for Robot Bin Picking using Discrete-Event Simulation of Robust Grasping Sequences
Jeffrey Mahler, UC Berkeley; Ken Goldberg, UC Berkeley

Harvesting common-sense navigational knowledge for robotics from uncurated text corpora
Nancy Fulda, BYU PCCL; Zachary Brown, BYU PCCL; Nathan Tibbetts, BYU PCCL

Non-Archival Track

Seeing the Force: Integrating Poses and Visually Latent Forces for Manipulations through Fluent Discovery and Imitation Learning
Feng Gao; Mark Edmonds; Xu Xie, UCLA; Hangxin Liu, University of California, Los Angeles; Siyuan Qi, UCLA; Yixin Zhu, UCLA; Brandon Rothrock; Song-Chun Zhu, UCLA

Long-Term On-Board Prediction of Pedestrians in Traffic Scenes
Apratim Bhattacharyya, MPI Informatics; Mario Fritz, MPI Informatics; Bernt Schiele

Predictive-State Decoders: Augmenting Recurrent Networks for Better Filtering, Imitation, and Reinforcement Learning
Arun Venkatraman, Carnegie Mellon University; Nick Rhinehart, Carnegie Mellon University; Wen Sun, Carnegie Mellon University; Byron Boots, Georgia Institute of Technology; Kris Kitani, Carnegie Mellon University; Drew Bagnell, Carnegie Mellon University

Safe Model-based Reinforcement Learning with Stability Guarantees
Felix Berkenkamp, ETH Zurich; Matteo Turchetta, ETH Zurich; Angela Schoellig, University of Toronto; Andreas Krause, ETH Zurich

QMDP-Net: Deep Learning for Planning under Partial Observability
Peter Karkus*, National University of Singapore; David Hsu, NUS; Wee Sun Lee, NUS

Neural Task Programming: Learning to Generalize Across Hierarchical Tasks
Danfei Xu, Stanford University; Yuke Zhu, Stanford University; Yuan Gao, Stanford University; Animesh Garg, Stanford University; Li Fei-Fei, Stanford University; Silvio Savarese, Stanford University

Most Likely Expected Improvement for Automatic Prior Selection in Data-Efficient Direct Policy Search
Rémi Pautrat, Inria Nancy - Grand-Est; Konstantinos Chatzilygeroudis, Inria Nancy Grand-Est; Jean-Baptiste Mouret, INRIA

Learning Time Invariant Driver Styles with Burn-InfoGAIL
Alex Kuefler, Stanford University; Mykel Kochenderfer, Stanford University

Learning to Fly by Crashing
Dhiraj Gandhi, Carnegie Mellon University; Lerrel Pinto, Carnegie Mellon University; Abhinav Gupta, Carnegie Mellon University

Action Learning for Multi-Agent Navigation
Julio Godoy, Universidad de Concepción; Tiannan Chen, University of Minnesota; Stephen Guy, University of Minnesota; Ioannis Karamouzas, Clemson University; Maria Gini, University of Minnesota

Deep Neural Networks as Add-on Modules for High-Accuracy Impromptu Trajectory Tracking
SiQi Zhou, University of Toronto; Mohamed Helwa, University of Toronto; Angela Schoellig, University of Toronto

Predictive State Models for Prediction and Control in Partially Observable Environments
Ahmed Hefny, Carnegie Mellon University; Zita Marinho, Carnegie Mellon University; Wen Sun, Carnegie Mellon University; Carlton Downey, Carnegie Mellon University; Geoffrey Gordon, Carnegie Mellon University

Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Ofir Nachum, Google; Mohammad Norouzi, Google; Kelvin Xu, Google; Dale Schuurmans, Google

IntentionGAN: Multi-Task Imitation Learning from Unstructured Demonstrations
Karol Hausman, USC; Yevgen Chebotar, USC; Stefan Schaal, USC; Gaurav Sukhatme, USC; Joseph Lim, USC

Manifold Regularization for Kernelized LSTD
Xinyan Yan, Google; Krzysztof Choromanski; Byron Boots, Georgia Institute of Technology; Vikas Sindhwani, Google

Predictive State Recurrent Neural Networks
Ahmed Hefny, Carnegie Mellon University; Carlton Downey*, Carnegie Mellon University; Byron Boots, Georgia Institute of Technology; Geoffrey Gordon, Carnegie Mellon University

Bodily aware soft robots: integration of proprioceptive and exteroceptive sensors
Gabor Soter, University of Bristol; Jonathan Rossiter, University of Bristol; Helmut Hauser, University of Bristol; Andrew Conn, University of Bristol

Towards Grasp Transfer using Shape Deformation
Andrey Kurenkov, Stanford University; Viraj Mehta, Stanford University; Jingwei Ji, Stanford University; Animesh Garg, Stanford University; Silvio Savarese, Stanford University