From Human-Robot Interaction to Human-Robot Integration
Monday, November 16 (07:15 - 08:00 PST)
The health emergency caused by the COVID-19 pandemic has given new urgency to the long-standing goal of building machines that can help people carry out their physical work safely, even in environments that, once familiar, have suddenly become inaccessible or potentially hostile.
Fortunately, recent advances in robotics research have made it possible to build machines that not only approach or surpass the computational intelligence of humans, but are also capable of ever more natural motion, exploiting the "physical" intelligence embodied in their structure. Informed by neuroscientific models of how humans interact with the physical world, new robots can safely touch humans and the environment and physically act on it. New sensing and display tools make it possible for senses other than vision to share information about the world between a robot and a human. The union of these technologies, together with a deeper understanding of how to interface humans and machines, is enabling a new relationship between humans and robots, one that is really more an integration than an interaction in the classical sense.
We will consider examples of partial integration, as in prosthetics and rehabilitation; augmentation with exoskeletons and supernumerary limbs; and shared-autonomy robotic avatars, in which the robot executes the human's intended actions while the human perceives the context of those actions and their consequences. Finally, we will discuss how human-robot integration could be leveraged to program the machine through "body language" instructions that even robotics-naive users could give intuitively.
Session Chair: Jens Kober
Antonio Bicchi
Antonio Bicchi is a scientist interested in robotics and intelligent machines. After graduating in Pisa and receiving a Ph.D. from the University of Bologna, he spent a few years at the MIT AI Lab in Cambridge before becoming Professor of Robotics at the University of Pisa. In 2009 he founded the Soft Robotics Laboratory at the Italian Institute of Technology in Genoa. Since 2013 he has also been an Adjunct Professor at Arizona State University, Tempe, AZ. He has coordinated many international projects, including four grants from the European Research Council (ERC). He launched pioneering initiatives such as the WorldHaptics conference, the major conference on natural and artificial touch, and the IEEE Robotics and Automation Letters, today the largest journal in the field. He is currently the President of the Italian Institute of Robotics and Intelligent Machines.
He has authored over 500 scientific papers, cited more than 24,000 times, and has supervised over 60 doctoral students and more than 20 postdocs, most of whom are now professors at universities and international research centers. His students have received prestigious awards, including two first prizes and two nominations for the best European theses in robotics and haptics. He has been an IEEE Fellow since 2005. In 2018 he received the prestigious IEEE Saridis Leadership Award.
Walking the Boundary of Learning and Interaction
Tuesday, November 17 (07:00 - 07:45 PST)
There have been significant advances in the field of robot learning in the past decade. However, many challenges remain when we consider how robot learning can advance interactive agents that collaborate with humans: autonomous vehicles that interact with human-driven vehicles or pedestrians, service robots that collaborate with their users at home over short or long periods of time, or assistive robots that help patients with disabilities. This creates an opportunity to develop new robot learning algorithms that can advance interactive autonomy.
In this talk, I will discuss a formalism for human-robot interaction built upon ideas from representation learning. Specifically, I will first discuss the notion of latent strategies: low-dimensional representations sufficient for capturing non-stationary interactions. I will then talk about the challenges of learning such representations when interacting with humans, and how we can develop data-efficient techniques that enable actively learning computational models of human behavior from demonstrations, preferences, or physical corrections. Finally, I will introduce an intuitive control paradigm that enables seamless collaboration based on learned representations, and discuss how it can further be used to influence humans.
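As a toy illustration of the latent-strategy idea described above (not the speaker's actual models: the linear generative assumption, the map W, and all variable names below are hypothetical), a low-dimensional strategy vector can be recovered from noisy observations of a partner's high-dimensional actions and then used to predict the partner's next action:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the partner's 10-D actions are generated from a
# 2-D latent strategy z through a fixed, assumed-known linear map W.
d_action, d_latent, T = 10, 2, 20
W = rng.normal(size=(d_action, d_latent))   # illustrative action basis
z_true = np.array([1.5, -0.5])              # the partner's latent strategy

# Noisy observations of the partner's actions over T interaction steps.
actions = z_true @ W.T + 0.05 * rng.normal(size=(T, d_action))

# Infer the latent strategy by least squares on the averaged observation,
# then predict the partner's next action from the recovered strategy.
z_hat, *_ = np.linalg.lstsq(W, actions.mean(axis=0), rcond=None)
predicted_next_action = W @ z_hat
```

In this simplified linear setting the strategy is identifiable by regression; the non-stationary, data-efficient learning problems the talk addresses arise when the map from strategy to behavior is unknown and changes over the interaction.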
Session Chair: Claire Tomlin
Dorsa Sadigh
Dorsa Sadigh is an assistant professor of Computer Science and Electrical Engineering at Stanford University. Her research interests lie at the intersection of robotics, learning, and control theory. Specifically, she is interested in developing algorithms for safe and adaptive human-robot interaction. Dorsa received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017 and her bachelor's degree in EECS from UC Berkeley in 2012. She has received the NSF CAREER Award, the AFOSR Young Investigator Award, the IEEE TCCPS Early Career Award, the Google Faculty Award, and the Amazon Faculty Research Award.
Integrating Planning and Learning for Scalable Robot Decision Making
Wednesday, November 18 (07:00 - 07:45 PST)
To become intelligent and effective human helpers, robots must make complex decisions in highly dynamic and interactive human environments: driving through a crowded intersection with many pedestrians and vehicles, or picking up a coffee mug from a pile of different objects. Planning enables the robot to reason about the consequences of its actions far into the future, based on a prior model of the environment; however, it eventually succumbs to the combinatorial complexity of that reasoning. Learning builds on past experiences to identify commonality among similar tasks and exploit "remembered" solutions, but it then faces the challenge of generalizing to new, unseen tasks. In this talk, we will look at several ideas that try to fuse planning and learning by (i) inserting learning into planning, (ii) inserting planning into learning, and (iii) stacking them up in a hierarchy. In doing so, we achieve improved efficiency for planning, improved generalization for learning, and, most importantly, scalable and generalizable robot decision making.
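As a minimal sketch of idea (i), inserting learning into planning, a forward search can use a learned value estimate to prioritize its frontier. Everything below is illustrative rather than taken from the talk: the grid world and function names are made up, and the "learned" heuristic is a hand-coded stand-in (Manhattan distance) for what would in practice be a trained model.

```python
import heapq

def learned_heuristic(state, goal):
    # Stand-in for a learned value estimate V(s); here, Manhattan distance.
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def plan(start, goal, obstacles, size=5):
    """Best-first search on a small grid, guided by the heuristic above."""
    frontier = [(learned_heuristic(start, goal), 0, start, [start])]
    seen = {start}
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        x, y = state
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in seen and nxt not in obstacles):
                seen.add(nxt)
                heapq.heappush(frontier,
                               (cost + 1 + learned_heuristic(nxt, goal),
                                cost + 1, nxt, path + [nxt]))
    return None  # no collision-free path found

route = plan((0, 0), (4, 4), obstacles={(1, 1), (2, 2), (3, 3)})
```

The learned estimate steers the search toward promising states, so far fewer nodes are expanded than in an uninformed search; the better the learned value function, the less the planner must rely on exhaustive lookahead.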
Session Chair: Fabio Ramos
David Hsu
David Hsu is a professor of computer science at the National University of Singapore (NUS). He received his PhD in computer science from Stanford University. At NUS, he co-founded the NUS Advanced Robotics Center and has been serving as its Deputy Director. He is an IEEE Fellow.
His research spans robotics and AI. In recent years, he has been working on robot planning and learning under uncertainty for human-centered robots. Together with colleagues and students, he won the Humanitarian Robotics and Automation Technology Challenge Award at the International Conference on Robotics and Automation (ICRA) 2015, the RoboCup Best Paper Award at the International Conference on Intelligent Robots and Systems (IROS) 2015, and the Best Systems Paper Award at Robotics: Science and Systems (RSS) 2017. He has chaired or co-chaired several major international robotics conferences, including the International Workshop on the Algorithmic Foundations of Robotics (WAFR) 2004 and 2010, ICRA 2016, and RSS 2015. He currently serves on the editorial board of the International Journal of Robotics Research (IJRR).