February 2021: Home here, home at home

This is my first update here, so thank you for subscribing. It has been a busy couple of months to start the year, and more work will be coming out soon. I still have a couple of blog posts and papers to share with you from a productive fall.

This is sent from where I grew up in Rhode Island. I am very grateful to be able to work from around the country during such challenging times. I hope all of my readers are doing well, and I look forward to continuing to connect with more of you (mostly on Twitter).

Some things I am thinking about (reach out if you are too):

  • How the three histories of RL (optimal control, neuroscience, and search) impact its current trajectory.
  • The relation between feedback and intelligence.
  • How to use sample-based control on learned models (dealing with noise and exploitation).
  • How to scale multi-agent simulators to many agents (1000s); most current research uses fewer than 20, and “swarm” is often used in a misleading way.
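The sample-based control idea above is often realized as random-shooting model-predictive control on a learned dynamics model. A minimal sketch, where `learned_model`, the reward, and all parameters are hypothetical stand-ins (a real system would use a trained model and an ensemble or noise handling to avoid model exploitation):

```python
import numpy as np

def learned_model(state, action):
    # Stand-in for a learned dynamics model (e.g., a neural network
    # trained on transition data); here, trivial placeholder dynamics.
    return state + 0.1 * action

def reward(state):
    # Assumed task: drive the state toward the origin.
    return -np.sum(state ** 2)

def random_shooting_mpc(state, horizon=10, n_candidates=500, action_dim=2, seed=None):
    """Sample random action sequences, roll each out through the learned
    model, and return the first action of the best-scoring sequence."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    returns = np.zeros(n_candidates)
    for i, seq in enumerate(candidates):
        s = state.copy()
        for a in seq:
            s = learned_model(s, a)
            returns[i] += reward(s)
    # Only the first action is executed; the plan is recomputed each step.
    best = int(np.argmax(returns))
    return candidates[best, 0]

action = random_shooting_mpc(np.array([1.0, -1.0]), seed=0)
print(action.shape)  # (2,)
```

The noise and exploitation problem mentioned above shows up here directly: the optimizer will happily pick action sequences whose predicted return is high only because the learned model is wrong there.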

Some finished things:

  • AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks: my first academic work at the interface of automation and society. We are trying to develop better terminology for discussing new scales of risk (the follow-on to this work is already under review!).
  • This preprint is appearing on arXiv tomorrow: On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning. The TL;DR is that pairing AutoML with RL is so effective that it breaks many of the simulators we use as “baselines” today. RL needs a new set of common challenges so that papers are no longer just the product of graduate-student-with-tweezers hyperparameter tuning.
  • My only recent personal blog post was on graphic design for graduate students (and any technical employee). I plan to write more on what graduate school really selects for and optimizes for, along with other things that can help future students, so stay tuned.

Democratizing Automation

My Substack has continued to grow (finally in the 200+ subscriber club). It has mostly covered reinforcement learning (RL) and robotics recently, with more RL content coming soon.

See you next month!