Nathan is a robot learning researcher, writer, non-professional athlete, and mental-health advocate.

There really is too much noise.

I do my best to contribute only high-signal content on machine learning, human optimization, and the nature of life.



The last reliable path into AI

A confluence of trends leaves AI+something, rather than pure AI, as the last great path into machine learning research.


ML/RL & Microrobotics

A memo I wrote to my research group on the open questions when applying machine learning to another research area: novel microrobotics.


Lifelong Learning 2021

What I have been learning from recently.

More musings →


Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems

We detail why reinforcement learning systems pose a different type of (dynamic) risk to society. This paper outlines the different types of feedback present in RL systems, the risks they pose, and a path forward for policymakers.

The Challenges of Exploration for Offline Reinforcement Learning

We flip the script on offline RL research, asking "what is the best dataset to collect?" rather than "what is the best algorithm?"

Investigating Compounding Prediction Errors in Learned Dynamics Models

In this paper we set out to understand the causes of compounding prediction errors in one-step learned dynamics models. With this, we hope a next generation of models can improve model-based reinforcement learning.
More papers →