May 04, 2023
Poor sample efficiency continues to be the primary challenge for the deployment of deep reinforcement learning (RL) algorithms in real-world applications, particularly for visuo-motor control. Model-based RL has the potential to be highly sample-efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. In practice, however, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning -- policy pretraining, targeted exploration, and oversampling of demonstration data -- which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 150%-250% more successful in completing sparse-reward tasks than prior approaches in the low-data regime (100K interaction steps, 5 demonstrations).
Publisher
ICLR
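As a rough illustration of the three-phase recipe -- policy pretraining, targeted exploration, and oversampling of demonstration data -- the following Python sketch outlines one possible structure. It is not the paper's implementation: all names (`bc_update`, `update_with_model`, the Gym-style `env.step` interface) and the 50% demo-sampling ratio are illustrative assumptions.

```python
import random

BATCH_SIZE = 32
DEMO_RATIO = 0.5  # assumed fraction of each batch drawn from demonstrations


def rollout(env, policy, noisy=False):
    """Collect one episode of transitions with the current policy."""
    obs, done, traj = env.reset(), False, []
    while not done:
        action = policy.act(obs, noisy=noisy)
        next_obs, reward, done, info = env.step(action)  # Gym-style API assumed
        traj.append((obs, action, reward, next_obs))
        obs = next_obs
    return traj


def sample_batch(demos, replay_buffer, batch_size=BATCH_SIZE):
    """Oversample demonstrations: a fixed fraction of every batch is demo data."""
    n_demo = int(DEMO_RATIO * batch_size)
    batch = [random.choice(demos) for _ in range(n_demo)]
    batch += [random.choice(replay_buffer) for _ in range(batch_size - n_demo)]
    return batch


def train(env, policy, world_model, demos,
          pretrain_steps=1000, seed_episodes=10, train_steps=100_000):
    replay_buffer = []

    # Phase 1: policy pretraining -- behavior cloning on the demonstrations.
    for _ in range(pretrain_steps):
        obs, action, _, _ = random.choice(demos)
        policy.bc_update(obs, action)  # supervised loss on (obs, action) pairs

    # Phase 2: targeted exploration -- roll out the pretrained (noisy) policy
    # so the initial replay buffer covers states near the demonstrations.
    for _ in range(seed_episodes):
        replay_buffer.extend(rollout(env, policy, noisy=True))

    # Phase 3: interleave world-model fitting and policy improvement,
    # oversampling demo data at every gradient step.
    for _ in range(train_steps):
        batch = sample_batch(demos, replay_buffer)
        world_model.update(batch)                     # fit dynamics/reward model
        policy.update_with_model(world_model, batch)  # improve via model rollouts
        replay_buffer.extend(rollout(env, policy, noisy=True))
```

The point the sketch tries to make concrete is phase 3's fixed demonstration fraction per batch: even late in training, every gradient update still sees expert transitions, rather than letting the handful of demonstrations be drowned out by the growing interaction dataset.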