July 17, 2022
In this paper, we explore how to endow robots with the ability to learn correspondences between their own skills and those of agents with different embodiments, in different domains, in an entirely unsupervised manner. Our key insight is that agents with different embodiments use similar strategies (high-level skill sequences) to solve similar tasks. Based on this insight, we frame learning skill correspondences as a problem of matching distributions of skill sequences across agents. We then present an unsupervised objective, inspired by recent advances in unsupervised machine translation, that encourages a learnt skill-translation model to match these distributions across domains. Despite being completely unsupervised, our approach learns semantically meaningful correspondences between skills across multiple robot-robot and human-robot domain pairs. Further, the learnt correspondences enable the transfer of task strategies across robots and domains. Dynamic visualization of our results can be found here: https://sites.google.com/view/translatingrobotskills/home
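To make the distribution-matching idea more concrete, below is a minimal, hypothetical sketch of training a skill-translation model on unlabeled skill sequences with a back-translation (cycle-consistency) objective, one common way of matching sequence distributions in unsupervised machine translation. The model names, architecture, skill counts, and loss here are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: translating discrete skill sequences between two
# domains (A and B) without paired supervision, using back-translation.
# All shapes, names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

NUM_SKILLS_A, NUM_SKILLS_B, EMB = 16, 20, 64

class SkillTranslator(nn.Module):
    """Maps a sequence of source-domain skill IDs to target-domain skill logits."""
    def __init__(self, n_src, n_tgt):
        super().__init__()
        self.embed = nn.Embedding(n_src, EMB)
        self.encoder = nn.GRU(EMB, EMB, batch_first=True)
        self.head = nn.Linear(EMB, n_tgt)

    def forward(self, skill_ids):                  # (batch, seq_len) int64
        h, _ = self.encoder(self.embed(skill_ids))
        return self.head(h)                        # (batch, seq_len, n_tgt)

a2b = SkillTranslator(NUM_SKILLS_A, NUM_SKILLS_B)
b2a = SkillTranslator(NUM_SKILLS_B, NUM_SKILLS_A)
optim = torch.optim.Adam(list(a2b.parameters()) + list(b2a.parameters()), lr=1e-3)

def cycle_loss(seq_a):
    """Back-translation: A -> B (soft skills) -> A should reconstruct the input."""
    logits_b = a2b(seq_a)
    # Soft embedding of the predicted B-domain skills keeps the path differentiable.
    soft_b = logits_b.softmax(-1) @ b2a.embed.weight   # (batch, seq_len, EMB)
    h, _ = b2a.encoder(soft_b)
    logits_a = b2a.head(h)
    return nn.functional.cross_entropy(
        logits_a.reshape(-1, NUM_SKILLS_A), seq_a.reshape(-1))

# Toy unlabeled skill sequences from domain A stand in for real demonstrations.
for step in range(100):
    seq_a = torch.randint(0, NUM_SKILLS_A, (32, 8))
    loss = cycle_loss(seq_a)
    optim.zero_grad(); loss.backward(); optim.step()
```

In a fuller treatment one would also add a term that pushes translated sequences toward the marginal distribution of real target-domain skill sequences (e.g., an adversarial or density-matching loss), so the translator cannot satisfy the cycle objective with an arbitrary invertible mapping.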
Written by
Stuart Anderson
Aravind Rajeswaran
Vikash Kumar
Yixin Lin
Jean Oh
Tanmay Shankar
Publisher
ICML
Research Topics
Robotics
Foundational models