Creating Interactive Haptic Robots
Monday, November 16 (09:00 - 10:15 PST)
Washing the breakfast dishes, packing your bag, and hugging your partner goodbye are everyday tasks requiring little effort for healthy adults. You might thus guess that these tasks will also be easy for modern robots, but they are not! Unfortunately, engineering excellent robotic systems is genuinely difficult, especially if they need to touch things you value (like your dishes or your spouse). I became fascinated with physically interactive mechatronic systems at the start of graduate school and have spent the last twenty years doing research on haptic interfaces, teleoperation, tactile perception, and human-robot interaction. This tutorial will share key insights, strategies, and tools that I have come to value over this journey, touching on mechanical, electrical, computational, and social aspects. I hope my recommendations will help inspire and empower you to join in this exciting quest to create interactive haptic robots.
Session Chair: Manuel Lopes
Katherine J. Kuchenbecker (Show Bio)
Katherine J. Kuchenbecker directs the Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. She earned her Ph.D. at Stanford University in 2006, did postdoctoral research at the Johns Hopkins University, and was an engineering professor at the University of Pennsylvania before moving to the Max Planck Society in 2017. She delivered a TEDYouth talk on haptics in 2012 and has been honored with a 2009 NSF CAREER Award, the 2012 IEEE RAS Academic Early Career Award, a 2014 Penn Lindback Award for Distinguished Teaching, and various best paper and best demonstration awards. She co-chaired the IEEE RAS Technical Committee on Haptics from 2015 to 2017 and the IEEE Haptics Symposium in 2016 and 2018.
Safe Reinforcement Learning
Tuesday, November 17 (09:00 - 10:15 PST)
While we have seen remarkable breakthroughs in reinforcement learning in recent years, significant challenges remain before these approaches can be safely deployed in high-stakes real-world domains. In this tutorial, I will provide an overview of some approaches designed to make progress towards this goal. The emphasis will be on techniques that combine nonparametric learning with methods from robust optimization and control, as well as formal verification. Under certain conditions, these approaches enable exploration that improves performance over time while provably satisfying specified safety and stability properties.
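To give a flavor of the "explore while provably staying safe" idea, here is a minimal sketch (not from the tutorial) of Lipschitz-based safe exploration over a discretized 1-D action space: an action is evaluated only if a previously evaluated safe action certifies it via the Lipschitz bound. All names and constants (f_true, LIPSCHITZ, H_MIN) are illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: the unknown safety value f(a) must stay >= H_MIN.
def f_true(a):
    return 1.0 - a**2  # safe wherever f_true(a) >= H_MIN

LIPSCHITZ = 3.0  # assumed known Lipschitz constant of f_true on the domain
H_MIN = 0.2      # safety threshold

actions = np.linspace(-1.5, 1.5, 61)

# Start from a seed action known a priori to be safe.
safe = {0.0: f_true(0.0)}

for _ in range(50):
    # a' is certifiably safe if some evaluated safe action a guarantees
    # f(a') >= f(a) - L * |a' - a| >= H_MIN.
    candidates = [
        a for a in actions
        if a not in safe and any(
            fa - LIPSCHITZ * abs(a - b) >= H_MIN for b, fa in safe.items()
        )
    ]
    if not candidates:
        break  # the reachable safe set has been fully explored
    a_new = candidates[0]
    safe[a_new] = f_true(a_new)  # evaluate only certified-safe actions

# Every action actually tried satisfied the safety constraint.
assert all(v >= H_MIN for v in safe.values())
```

The point of the sketch is the exploration/safety trade-off: the agent never evaluates an action it cannot certify, so the safe set grows monotonically but conservatively. Methods covered in the tutorial replace the crude Lipschitz bound with learned nonparametric confidence estimates.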
Session Chair: Melanie Zeilinger
Andreas Krause (Show Bio)
Andreas Krause is a Professor of Computer Science at ETH Zurich, where he leads the Learning & Adaptive Systems Group. He also serves as Academic Co-Director of the Swiss Data Science Center and Chair of the ETH AI Center. Before that he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Microsoft Research Faculty Fellow and a Kavli Frontiers Fellow of the US National Academy of Sciences. He received ERC Starting Investigator and ERC Consolidator grants, the Deutscher Mustererkennungspreis, an NSF CAREER award as well as the ETH Golden Owl teaching award. His research has received awards at several premier conferences and journals, including the ACM SIGKDD Test of Time award 2019 and the ICML Test of Time award 2020. Andreas Krause served as Program Co-Chair for ICML 2018 and is serving as Action Editor for the Journal of Machine Learning Research.
A Fabrics Perspective on Nonlinear Behavior Representation
Wednesday, November 18 (09:00 - 10:15 PST)
In this tutorial, we review methods of reactive control ranging from standard workhorses such as operational space control and geometric control to new models derived from a recent encompassing theory of optimization fabrics. We discuss in detail the relationship between these models and the nonlinear geometries that govern them; we take a tour of generalized nonlinear (semi-spray) geometry, Finsler geometry, and the special case of Riemannian geometry to observe how their modeling assumptions shape the derived controllers. We’ll see that operational space control and geometric control, as well as a recent model called the geometric dynamical system (GDS), can all be viewed as instances of a class called Lagrangian fabrics, and each thereby inherits the structure and properties of that encompassing class. By contrast, we derive a broader class, called geometric fabrics, that is more flexible in its modeling capacity and overcomes the limitations of Lagrangian fabrics. We demonstrate the utility of geometric fabrics for modular behavioral design and use them as an empirical model to demonstrate how fabric design can promote strongly generalizing systems (systems designed on a small collection of training problems that exhibit strong generalization to entire distributions of problems encountered at deployment). We end by discussing how fabrics act as an efficient encoding medium, overcoming the problematic decoding complexity of standard cost function representations on the types of challenging higher-dimensional problems encountered in collaborative settings.
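As a toy illustration of the modular behavioral design the tutorial discusses, the sketch below composes two second-order reactive policies for a planar point (identity task map): a damped attractor toward a goal and a finite-range repeller around an obstacle, with their accelerations simply summed. This is only a hedged flavor of the idea, not the fabrics formulation itself; all gains, names, and the obstacle placement are illustrative assumptions.

```python
import numpy as np

def attract(x, xdot, goal, kp=4.0, kd=4.0):
    """Damped second-order attractor: xddot = -kp*(x - goal) - kd*xdot."""
    return -kp * (x - goal) - kd * xdot

def repel(x, obs, radius=0.3, krep=2.0):
    """Repulsive acceleration, active only within `radius` of the obstacle."""
    d = x - obs
    dist = np.linalg.norm(d)
    if dist >= radius or dist < 1e-9:
        return np.zeros(2)
    return krep * (radius - dist) / radius * d / dist

goal = np.array([1.0, -0.5])
obs = np.array([0.5, -0.2])  # sits slightly off the straight-line path
x = np.zeros(2)
xdot = np.zeros(2)
dt = 0.01

# Semi-implicit Euler rollout: the two policy terms are composed additively.
for _ in range(2000):
    xddot = attract(x, xdot, goal) + repel(x, obs)
    xdot += dt * xddot
    x += dt * xddot * 0.0 + dt * xdot

# The composed system detours around the obstacle and settles at the goal.
```

Because the repeller is inactive far from the obstacle, the goal remains the equilibrium of the composed system; the fabrics theory covered in the tutorial is, in part, about when such compositions provably retain stability and convergence.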
Session Chair: Animesh Garg
Nathan Ratliff (Show Bio)
Nathan Ratliff is a distinguished research scientist at NVIDIA studying behavior representation and robotic systems. He received his PhD in Robotics from Carnegie Mellon University under Prof. J. Andrew Bagnell, working closely with Prof. Siddhartha Srinivasa. He has worked at the Toyota Technological Institute in Chicago, Intel Labs, Google, Amazon, the Max Planck Institute for Intelligent Systems, and the University of Stuttgart. Before joining NVIDIA, he co-founded Lula Robotics, where he and his co-founders developed the robotic system that now drives many of NVIDIA’s research platforms.