Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems

ABSTRACT:
Reinforcement learning (RL) is a rapidly growing field of research and application in artificial intelligence (AI). RL formulates intelligence as a generic learning problem that, if solved, promises to match or exceed human capabilities. Current research and industrial applications of RL include social media recommendations, game-playing, traffic modeling, clinical health trials, and electric power utilities, among many other open-ended problems. In the long term, RL is considered by many AI theorists to be the most promising path to artificial general intelligence. This places RL practitioners in a position to design systems that have never existed before and lack prior documentation in law and policy. This is both an exciting and frightening development. RL may make it possible to optimize how traffic moves through cities, how social media users consume and share content, and how health administrators oversee medical interventions, among many other applications. Public agencies could intervene on complex dynamics that were previously too opaque to deliberate about, and long-held policy ambitions would finally become tractable.

In this whitepaper we illustrate this potential and how it might be technically enacted in the domains of energy infrastructure, social media recommender systems, and transportation. Alongside these unprecedented interventions come new forms of risk that exacerbate the harms already generated by standard machine learning tools. We correspondingly present a new typology of risks arising from RL design choices, falling under four categories: scoping the horizon, defining rewards, pruning information, and training multiple agents.

Beyond traditional machine learning (ML), there are two reasons why these risks constitute novel challenges for public policy. First, in ML the primary risks concern the outputs a model generates (e.g., whether a model makes fair decisions). In RL, by contrast, the risks arise from the initial specification of the task (e.g., learning “good” behavior for automated vehicles requires an up-front definition of good traffic flow). Addressing these risks will therefore require ex ante design considerations, rather than exclusively ex post evaluations of behavior. Second, RL is widely used as a metaphor for replacing human judgment with automated decision-making. Even if end-to-end RL is not yet feasible, companies’ strategic deployment of ML algorithms can be viewed as “human-driven RL.” Whether the decision-maker is a machine, a human engineer, or a combination thereof, RL serves as a window into the wider forms of automation now pursued by technology firms and the risks those forms introduce.
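
To make these design choices concrete, the sketch below shows how each of the four risk categories appears as an ex ante parameter of an RL task specification, using traffic-signal control as a running example. This is a minimal, hypothetical illustration rather than code from the whitepaper: the TrafficTaskSpec class, its field names, and its default values are assumptions written in a gym-style Python idiom.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class TrafficTaskSpec:
    # Scoping the horizon: how far into the future the agent is asked to optimize.
    horizon_steps: int = 3600     # e.g., one simulated hour at one step per second (illustrative)
    discount: float = 0.99        # implicitly down-weights long-term outcomes

    # Defining rewards: a fixed, up-front notion of "good traffic flow".
    reward_fn: Callable[[Dict[str, float]], float] = lambda obs: -obs["mean_vehicle_delay"]

    # Pruning information: which measurements the agent is allowed to observe.
    observed_features: List[str] = field(
        default_factory=lambda: ["queue_lengths", "mean_vehicle_delay"]
    )  # anything omitted here (e.g., pedestrian wait times) cannot shape learned behavior

    # Training multiple agents: do intersections learn jointly or independently?
    num_agents: int = 16
    shared_policy: bool = True

spec = TrafficTaskSpec()
example_obs = {"queue_lengths": 12.0, "mean_vehicle_delay": 34.5}
print(spec.reward_fn(example_obs))  # -34.5: the meaning of "good" is already settled here

Every field above is fixed before any behavior is learned or observed, which is the sense in which oversight of RL systems calls for ex ante documentation of design choices rather than only ex post audits of model outputs.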


Citation

@article{gilbert2021reward,
  title={Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems},
  author={Gilbert, Thomas and Dean, Sarah and Zick, Tom and Lambert, Nathan},
  journal={Center for Long-Term Cybersecurity},
  year={2022}
}