Learning Generalizable Locomotion Skills with Hierarchical Reinforcement Learning

ABSTRACT:
Learning to locomote to arbitrary goals on hardware remains a challenging problem for reinforcement learning. In this paper, we present a hierarchical learning framework that improves sample-efficiency and generalizability of locomotion skills on real-world robots. Our approach divides the problem of goal-oriented locomotion into two sub-problems: learning diverse primitive skills, and using model-based planning to sequence these skills. We parametrize our primitives as cyclic movements, improving sample-efficiency of learning on an 18-degrees-of-freedom robot. Then, we learn coarse dynamics models over primitive cycles and use them in a model predictive control framework. This allows us to learn to walk to arbitrary goals up to 12m away, after about two hours of training from scratch on hardware. Our results on the Daisy hexapod, in hardware and simulation, demonstrate the efficacy of our approach at reaching distant targets, in different environments and with sensory noise.
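
To make the abstract's "cyclic primitives" concrete, here is a minimal Python sketch of one way such a primitive could be parametrized: per-joint sinusoids with learned amplitude, phase, and offset over a fixed period. This specific parametrization is an illustrative assumption, not necessarily the exact controller used in the paper.

```python
import numpy as np

class CyclicPrimitive:
    """One locomotion primitive: a full gait cycle of per-joint sinusoids.

    The sinusoid form (amplitude, phase, offset per joint) is an
    illustrative assumption, not the paper's exact parametrization.
    """

    def __init__(self, amplitudes, phases, offsets, period=1.0):
        self.amplitudes = np.asarray(amplitudes)  # one entry per joint, e.g. 18 for Daisy
        self.phases = np.asarray(phases)
        self.offsets = np.asarray(offsets)
        self.period = period  # seconds per gait cycle

    def joint_targets(self, t):
        """Desired joint angles at time t within the cycle."""
        omega = 2.0 * np.pi / self.period
        return self.offsets + self.amplitudes * np.sin(omega * t + self.phases)
```

Because the learner searches over one such parameter vector per gait cycle, rather than an action at every control timestep, each hardware rollout evaluates a whole cycle, which is plausibly where the sample-efficiency gain comes from.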

What you need to know:

  1. Learning to control even well-engineered, professional-grade systems is still very hard (this task is quite doable with non-learning methods).
  2. Integrating model-free and model-based methods shows very promising signs: model-free learning for fine-tuning low-level skills, model-based planning for high-level reasoning (see the sketch below).
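
As an illustration of how the two levels could fit together, here is a hedged sketch of receding-horizon planning over primitive cycles. The names are hypothetical: `model.predict(state, k)` stands for a learned coarse dynamics model that maps the current state to the state after one full cycle of primitive `k`, `state[:2]` is assumed to hold the robot's (x, y) position, and random shooting stands in for whatever optimizer the planner actually uses.

```python
import numpy as np

def plan_next_primitive(state, goal, model, num_primitives,
                        horizon=3, n_samples=100):
    """Random-shooting MPC over learned primitive cycles (illustrative sketch).

    `model.predict(state, k)` is a hypothetical coarse dynamics model that
    returns the robot state after one full cycle of primitive k.
    """
    best_cost, best_first = np.inf, 0
    for _ in range(n_samples):
        # Sample a candidate sequence of primitives over the planning horizon.
        seq = np.random.randint(num_primitives, size=horizon)
        s = state
        for k in seq:
            s = model.predict(s, k)  # one primitive cycle under the coarse model
        cost = np.linalg.norm(s[:2] - goal[:2])  # distance to goal after the horizon
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first  # execute this primitive for one cycle, then replan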

Citation

@INPROCEEDINGS{9196642,
  author    = {T. Li and N. Lambert and R. Calandra and F. Meier and A. Rai},
  booktitle = {2020 IEEE International Conference on Robotics and Automation (ICRA)},
  title     = {Learning Generalizable Locomotion Skills with Hierarchical Reinforcement Learning},
  year      = {2020},
  pages     = {413--419},
  doi       = {10.1109/ICRA40945.2020.9196642}
}