Local posts and writing from Interconnects.
Interconnects
2025
- 2025.12.18 2025 Interconnects year in review
- 2025.12.18 Open models: Hot or Not with Nathan Lambert & Florian Brand
- 2025.12.14 2025 Open Models Year in Review
- 2025.12.10 New Talk: Building Olmo 3 Think
- 2025.11.23 Latest open artifacts (#16): Who's building models in the U.S., China's model release playbook, and a resurgence of truly open models
- 2025.11.20 Olmo 3: America's truly open reasoning models
- 2025.11.16 Why AI writing is mid
- 2025.11.12 Interview: Ant Group's open model ambitions
- 2025.11.10 Opening the black box of character training
- 2025.11.06 5 Thoughts on Kimi K2 Thinking
- 2025.10.25 Burning out
- 2025.10.20 How to scale RL
- 2025.10.18 Latest open artifacts (#15): It's Qwen's world and we get to live in it, on CAISI's report, & GPT-OSS update
- 2025.10.16 The State of Open Models
- 2025.10.07 Thoughts on The Curve
- 2025.09.30 ChatGPT: The Agentic App
- 2025.09.22 Thinking, Searching, and Acting
- 2025.09.18 Coding as the epicenter of AI progress and the path to general agents
- 2025.09.11 Latest open artifacts (#14): NVIDIA's rise, "Swiss & UAE DeepSeek," and a resurgence of open data
- 2025.09.09 On China's open source AI trajectory
- 2025.08.17 Ranking the Chinese Open Model Builders
- 2025.08.15 Contra Dwarkesh on Continual Learning
- 2025.08.11 Latest open artifacts (#13): The abundance era of open models
- 2025.08.10 What I've been reading (#2): More on Kimi K2, how to build a bad research center, Pretraining with RL, and sporks of AGI
- 2025.08.07 GPT-5 and the arc of progress
- 2025.08.05 gpt-oss: OpenAI validates the open ecosystem (finally)
- 2025.08.04 Towards American Truly Open Models: The ATOM Project
- 2025.07.29 Interviewing Ross Taylor on the state of AI: Chinese open models, scaling reasoning, useful tools, and what comes next
- 2025.07.23 The White House's plan for open models & AI research in the U.S.
- 2025.07.22 Latest open artifacts (#12): Chinese models continue to dominate throughout the summer
- 2025.07.14 Kimi K2 and when "DeepSeek Moments" become normal
- 2025.07.12 xAI's Grok 4: The tension of frontier performance with a side of Elon favoritism
- 2025.07.04 The American DeepSeek Project
- 2025.06.28 Ilya on deep learning in 2015
- 2025.06.27 Interviewing Dean Ball on AI policy: CA SB 1047, upcoming AI disaster response, Llama 3 405B, Chinese open-source AI, and scaling laws
- 2025.06.26 Latest open artifacts (#11): Visualizing China's open models market share, Arcee's models, and VLAs for robotics
- 2025.06.23 Some ideas for what comes next
- 2025.06.21 What I've been reading (#1)
- 2025.06.18 Crafting a good (reasoning) model
- 2025.06.12 The rise of reasoning machines
- 2025.06.09 What comes next with reinforcement learning
- 2025.06.06 How I Write
- 2025.06.04 A taxonomy for next-generation reasoning models
- 2025.05.29 Latest open artifacts (#10): New DeepSeek R1 0528!, more permissive licenses, everything as a reasoner, and from artifacts to agents
- 2025.05.27 Claude 4 and Anthropic's bet on code
- 2025.05.27 Reinforcement learning with random rewards actually works with Qwen 2.5
- 2025.05.21 People use AI more than you think
- 2025.05.14 My path into AI
- 2025.05.06 What people get wrong about the leading Chinese open models: Adoption and censorship
- 2025.05.04 Sycophancy and the art of the model
- 2025.04.30 State of play of AI progress (and related brakes on an intelligence explosion)
- 2025.04.28 Qwen 3: The new open standard
- 2025.04.28 Transparency and (shifting) priority stacks
- 2025.04.21 The latest open artifacts (#9): RLHF book draft, where the open reasoning race is going, and unsung heroes of open LM work
- 2025.04.19 OpenAI's o3: Over-optimization is back and weirder than ever
- 2025.04.14 OpenAI's GPT-4.1 and separating the API from ChatGPT
- 2025.04.07 Llama 4: Did Meta just push the panic button?
- 2025.04.05 RL backlog: OpenAI's many RLs, clarifying distillation, and latent reasoning
- 2025.03.31 Recent reasoning research: GRPO tweaks, base model RL, and data curation
- 2025.03.30 GPT-4o's images and lessons from native input-output multimodality
- 2025.03.26 Gemini 2.5 Pro and Google's second chance with AI
- 2025.03.20 The latest open artifacts (#8): The return of ~30B models, side effects of OpenAI's proposed DeepSeek ban, and yet another reasoning roundup
- 2025.03.19 Managing frontier model training organizations (or teams)
- 2025.03.13 Gemma 3, OLMo 2 32B, and the growing potential of open-source AI
- 2025.03.12 Interviewing Eugene Vinitsky on self-play for self-driving and what else people do with RL
- 2025.03.10 Elicitation, the simplest way to understand post-training
- 2025.03.05 Where inference-time scaling pushes the market for AI companies
- 2025.02.28 GPT-4.5: "Not a frontier model"?
- 2025.02.26 Character training: Understanding and crafting a language model's personality
- 2025.02.24 Claude 3.7 thonks and what's next for inference-time scaling
- 2025.02.19 The latest open artifacts (#7): Alpaca era of reasoning models, China's continued dominance, and tons of multimodal advancements
- 2025.02.18 Grok 3 and an accelerating AI roadmap
- 2025.02.13 An unexpected RL Renaissance
- 2025.02.12 Deep Research, information vs. insight, and the nature of science
- 2025.02.05 Making the U.S. the home for open-source AI
- 2025.01.28 Why reasoning models will generalize
- 2025.01.27 The latest open artifacts (#6): Reasoning models, China's lead in open-source, and a growing multimodal space
- 2025.01.22 Interviewing OLMo 2 leads: Open secrets of training language models
- 2025.01.21 DeepSeek R1's recipe to replicate o1 and the future of reasoning LMs
- 2025.01.15 Let me use my local LMs on Meta Ray-Bans
- 2025.01.09 DeepSeek V3 and the actual cost of training frontier AI models
- 2025.01.08 The state of post-training in 2025
- 2025.01.02 Quick recap on the state of reasoning
Note: Only recent posts from Interconnects are shown above due to RSS feed limitations. Visit Interconnects for the complete archive.