1. How open model ecosystems compound
2. Notes from inside China's AI labs
3. The distillation panic
4. Reading today's open-closed performance gap
5. My bets on open models, mid-2026
6. What I’ve been building: ATOM Report, post-training course, finishing my book, and ongoing research
7. The inevitable need for an open model consortium
8. Claude Mythos and misguided open-weight fearmongering
9. Gemma 4 and what makes an open model succeed
10. Latest open artifacts (#20): New orgs! New types of models! With Nemotron Super, Sarvam, Cohere Transcribe, & others
11. Lossy self-improvement
12. GPT 5.4 is a big step for Codex
13. What comes next with open models
14. Dean Ball on open models and government control
15. Olmo Hybrid and future LLM architectures
16. Latest open artifacts (#19): Qwen 3.5, GLM 5, MiniMax 2.5 — Chinese labs' latest push of the frontier
17. How much does distillation really matter for Chinese LLMs?
18. Open models in perpetual catch-up
19. Opus 4.6, Codex 5.3, and the post-benchmark era
20. Why Nvidia builds open models with Bryan Catanzaro