Low Level Control of a Quadrotor with Deep Model-Based Reinforcement Learning

ABSTRACT:
Designing effective low-level robot controllers often entails platform-specific implementations that require manual heuristic parameter tuning, significant system knowledge, or long design times. With the rising number of robotic and mechatronic systems deployed across areas ranging from industrial automation to intelligent toys, the need for a general approach to generating low-level controllers is increasing. To address the challenge of rapidly generating low-level controllers, we argue for using model-based reinforcement learning (MBRL) trained on relatively small amounts of automatically generated (i.e., without system simulation) data. In this paper, we explore the capabilities of MBRL on a Crazyflie centimeter-scale quadrotor with rapid dynamics to predict and control at <50 Hz. To our knowledge, this is the first use of MBRL for controlled hover of a quadrotor using only on-board sensors, direct motor input signals, and no initial dynamics knowledge. Our controller leverages rapid simulation of a neural network forward dynamics model on a GPU-enabled base station, which then transmits the best current action to the quadrotor firmware via radio. In our experiments, the quadrotor achieved hovers of up to 6 seconds with only 3 minutes of experimental training data.

Takeaways

  1. Model-based RL with simple sampling mechanisms can work on dynamical systems that are both rapid and costly to test on.
  2. The question often becomes "how can I get the right data without breaking the robot?" rather than "what can I do with the data I have?"
  3. More work is needed on low-computation control (this approach runs online on an Nvidia 1080 GPU) and on safe exploration (MBRL's exploration mechanism is weird).
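The "simple sampling mechanism" from takeaway 1 refers to choosing actions by rolling random candidate action sequences through the learned dynamics model and replanning at every step. Below is a minimal sketch of that random-shooting loop; the `dynamics_model` placeholder, the toy linear dynamics, the hover cost, and all parameter values are illustrative assumptions, not the paper's actual network or tuning.

```python
import numpy as np

# Hypothetical stand-in for the learned neural-network forward dynamics
# model: maps (state, action) -> predicted next state. In the paper this
# is a network trained on logged flight data; here it is a toy linear
# model purely so the loop below runs.
def dynamics_model(state, action):
    return state + 0.02 * action  # assumed toy dynamics, not the real model

def random_shooting_mpc(state, horizon=12, n_samples=500, action_dim=4, rng=None):
    """Return the first action of the best sampled action sequence.

    Samples candidate action sequences (e.g., motor PWM commands), rolls
    each one out through the learned dynamics model, scores the predicted
    trajectories (here: distance of the final state from hover at the
    origin, an assumed cost), and keeps the lowest-cost sequence.
    """
    rng = rng or np.random.default_rng(0)
    # Candidate sequences, shape (n_samples, horizon, action_dim).
    actions = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, action_dim))
    costs = np.zeros(n_samples)
    for i in range(n_samples):
        s = state.copy()
        for t in range(horizon):
            s = dynamics_model(s, actions[i, t])
        costs[i] = np.linalg.norm(s)
    best = np.argmin(costs)
    return actions[best, 0]  # execute only the first action, then replan

state = np.array([0.1, -0.2, 0.05, 0.0])
u = random_shooting_mpc(state)       # one command per rotor
```

Only the first action of the winning sequence is sent to the quadrotor; the controller then replans from the newly observed state, which is why the per-step simulation budget (and hence the GPU in takeaway 3) matters.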

Citation

@ARTICLE{8769882,
author={Nathan O. {Lambert} and D. S. {Drew} and J. {Yaconelli} and S. {Levine} and R. {Calandra} and K. S. J. {Pister}},
journal={IEEE Robotics and Automation Letters},
title={Low-Level Control of a Quadrotor With Deep Model-Based Reinforcement Learning},
year={2019},
volume={4},
number={4},
pages={4224-4230},
keywords={Data models;Vehicle dynamics;Robots;Pulse width modulation;Attitude control;Trajectory;Predictive models;Deep learning in robotics and automation;aerial systems: mechanics and control},
doi={10.1109/LRA.2019.2930489},
ISSN={2377-3766},
}