June 2021: Teams versus Emergence; Goals versus Ideas
Greetings from Albany, NY.
There are two key pressures that can drive research:
- Goals: Management directed targets, and
- Ideas: Bottom-up emergent research.
Academic institutions naturally lean toward emergent, idea-driven research, where individuals define their own paths and follow their curiosity (funding detours aside). Industrial counterparts hire researchers to solve their most pressing problems, and naturally fall on the targeted side of research. Modern academia muddles this by making the sheer number of ideas you produce a central goal, which should never be a research priority.
In order to attract top academics, tech companies have been lowering the pressure on their researchers to contribute directly to the progress of core products. Microsoft Research (MSR) is known as the earliest to this game, having investigated the foundations of computing for 30 years. Now, Google and Facebook are the dominant players in redefining how large industrial labs behave.
Google’s deep learning division, Google Brain, was founded at the height of the deep learning revolution (covered well in Genius Makers) and is now one part of the larger Google AI organization. Facebook AI was ultimately founded in an effort to close the gap to Google and compete for talent and results. There is an open debate as to whether Facebook has created a more open and appealing research organization, but Google is far and away the most proven at translating said research into products (a recent example being BERT). Working at Facebook Research seems disorganized, which may feel good for academics who want to shop their ideas around, but it may not be the optimal structure for achieving scientific advancement (especially for companies that can set their targets independent of academic norms, such as DeepMind’s goals for impact).
Something that does not come up a lot when my colleagues at Berkeley discuss Google AI, Facebook AI, OpenAI, etc. is the role of research teams. DeepMind seems to have hit that on the head. By striking a balance between management-driven and team-driven research, it seems like an environment where researchers have freedom yet feel as if they are contributing to larger, company-driven objectives. There is a separation between teams and company goals: researchers interact with and feel close to their team, but can also slot into bigger goals.
This excerpt on DeepMind’s differences highlights it well, and spares me any risk of saying anything internal (from this post on AlphaFold, see the section Why DeepMind?):
> Resources also helped and this is not to be underestimated, but I would like to focus on organizational structure as I believe it is the key factor beyond the individual contributors themselves. DeepMind is organized very differently from academic groups. There are minimal administrative requirements, freeing up time to do research. This research is done by professionals working at the same job for years and who have achieved mastery of at least one discipline. Contrast this with academic labs where there is constant turnover of students and postdocs. This is as it should be, as their primary mission is the training of the next generation of scientists. Furthermore, at DeepMind everyone is rowing in the same direction. There is a reason that the AF2 abstract has 18 co-first authors and it is reflective of an incentive structure wholly foreign to academia. Research at universities is ultimately about individual effort and building a personal brand, irrespective of how collaborative one wants to be. This means the power of coordination that DeepMind can leverage is never available to academic groups. Taken together these factors result in a “fast and focused” research paradigm.
For some context, I started a new research internship at DeepMind, and it truly seems like a different, almost magical place for research (with huge upfront time costs). It also reminds me how much easier it is for me to find fulfillment working with a team, rather than feeling the constant comparison pressure of normal academic processes. I’ll be doing research on robotics, reinforcement learning, and related fields, so not a big change. I’m still getting my feet under me, and it feels like it will be a great experience.
I have also put some more blog posts out in the last month:
- Reward is not enough: Multi-agent scenarios make reward maximization a risk. I discuss when, rather than if, we should believe in the Reward Hypothesis.
- How all machine learning becomes reinforcement learning: I make the case that anyone iteratively retraining a model should learn some core concerns of reinforcement learning.
I have been reading Infinite Powers by Steven Strogatz and really appreciating having a job where understanding, enjoying, and utilizing diverse forms of math is a core principle.
See you next month! Search for more!