Investigating Compounding Prediction Errors in Learned Dynamics Models

Download the paper! · Read on ArXiv! · Run the code! · Video available!
ABSTRACT:
Accurately predicting the consequences of agents’ actions is a key prerequisite for planning in robotic control. Model-based reinforcement learning (MBRL) is one paradigm that relies on iteratively learning and predicting state-action transitions to solve a task. Deep MBRL has become a popular candidate, using a neural network to learn a dynamics model that, in each forward pass, maps a high-dimensional state and action to the predicted next state. These “one-step” predictions are known to become inaccurate over longer horizons of composed prediction, a phenomenon called the compounding error problem. Given the prevalence of the compounding error problem in MBRL and related fields of data-driven control, we set out to understand the properties of and conditions causing these long-horizon errors. In this paper, we explore the effects of the subcomponents of a control problem on long-term prediction error, including choosing a system, collecting data, and training a model. With this analysis of compounding error, we delineate a set of key considerations to help practitioners understand the potential errors when modeling a new system. These detailed quantitative studies on simulated and real-world data show that the underlying dynamics of a system are the strongest factor determining the shape and magnitude of prediction error. Given a clearer understanding of compounding prediction error, researchers can implement new types of models beyond “one-step” that are more useful for control.
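To make the compounding error problem concrete, here is a minimal sketch (not from the paper; the toy 1-D dynamics, the constant model bias `epsilon`, and the 50-step horizon are all illustrative assumptions) of how feeding a one-step model its own predictions lets small per-step errors accumulate over the horizon:

```python
import numpy as np

# True (unknown) dynamics of a toy 1-D system: s' = f(s, a).
def true_step(s, a):
    return s + 0.1 * np.sin(s) + 0.05 * a

# A learned "one-step" model with a small systematic error;
# epsilon stands in for approximation error left over from training.
def learned_step(s, a, epsilon=1e-2):
    return s + 0.1 * np.sin(s) + 0.05 * a + epsilon

# Roll both out for H steps from the same start state and compare.
def compounding_error(s0, actions):
    s_true, s_pred, errors = s0, s0, []
    for a in actions:
        s_true = true_step(s_true, a)
        s_pred = learned_step(s_pred, a)  # model is fed its OWN prediction
        errors.append(abs(s_pred - s_true))
    return errors

errors = compounding_error(s0=0.0, actions=np.zeros(50))
print(errors[0], errors[-1])  # per-step error is tiny; horizon error has grown
```

Even though the model is accurate to within `epsilon` on any single transition, the rollout drifts because each prediction starts from an already-perturbed state, which is why long-horizon error can grow far faster than the one-step training loss suggests.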

What you need to know:

Citation