Reconstructing Pixel Blocks

Generative models are a broad class of systems that can create new samples following some training distribution. Variational Autoencoders (VAEs) are one example: they are trained through a bottleneck latent space, which forces the model to learn meaningful representations of its inputs.
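To make the bottleneck idea concrete, here is a minimal NumPy sketch of the two pieces that shape a VAE's latent space: the reparameterization trick (sampling `z` in a way that stays differentiable) and the closed-form KL term that regularizes the latents toward a standard normal. The `encode` function is a stand-in, not a learned network, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, latent_dim=2):
    # Stand-in encoder: squeeze the input down to a per-dimension mean and
    # log-variance. A real VAE would compute these with a learned network.
    mu = x[:latent_dim] * 0.5
    log_var = np.zeros(latent_dim)
    return mu, log_var

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * eps; the randomness lives in eps, so the
    # sample remains differentiable with respect to mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # KL(q(z|x) || N(0, I)) in closed form: the regularizer that shapes
    # the bottleneck latent space.
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

x = rng.standard_normal(8)
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
```

The bottleneck is the whole point: `z` has far fewer dimensions than `x`, so the decoder can only reconstruct inputs the encoder has summarized well.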

Read More

Deep Equilibrium Models and the Implicit Function Theorem

Deep Equilibrium Models (DEQs) are a class of models that represent ‘infinite depth’ neural networks through recursion. Think recurrent neural networks, but instead of recurring in time, they recur in depth. This effectively unbounded depth lets DEQs represent complex, multi-step reasoning procedures that a fixed-depth network, such as a 16-layer Transformer, cannot. DEQs have demonstrated competitive performance in language modeling and vision tasks, often improving accuracy while reducing memory by up to 88%.
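The ‘recurring in depth’ idea can be sketched in a few lines: apply one weight-tied layer over and over until its output stops changing, i.e. until we reach a fixed point z* = f(z*, x). This toy version uses random NumPy matrices (scaled down so the iteration converges) in place of trained weights; real DEQs typically use faster root-finding than plain iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Scale W down so the layer is contractive and the iteration converges.
W = rng.standard_normal((d, d)) * 0.1
U = rng.standard_normal((d, d)) * 0.5
b = rng.standard_normal(d) * 0.1

def layer(z, x):
    # One weight-tied "layer"; a DEQ applies this same map at every depth.
    return np.tanh(W @ z + U @ x + b)

def deq_forward(x, tol=1e-8, max_iter=500):
    # Iterate the layer until z reaches a fixed point: infinite depth,
    # but only one layer's worth of parameters.
    z = np.zeros(d)
    for i in range(max_iter):
        z_next = layer(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next, i + 1
        z = z_next
    return z, max_iter

x = rng.standard_normal(d)
z_star, iters = deq_forward(x)
```

The memory savings come from the fixed point itself: gradients can be computed through z* with the implicit function theorem, so none of the intermediate iterates need to be stored.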

Read More

The Hierarchical Reasoning Model

The Hierarchical Reasoning Model (HRM) introduces a biologically inspired recurrent architecture designed to overcome the reasoning limitations of standard Transformers and Chain-of-Thought (CoT) prompting. Comprising two interdependent modules—a slow, high-level planner and a fast, low-level executor—HRM achieves deep computational reasoning in a single forward pass without pretraining or intermediate supervision. With just 27M parameters and 1,000 training examples, it surpasses much larger models on benchmarks like ARC-AGI, Sudoku-Extreme, and Maze-Hard, demonstrating near-perfect accuracy on tasks that typically require symbolic search and backtracking.
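The two-module structure amounts to recurrence at two timescales: a fast executor state that updates every step, and a slow planner state that updates only once per cycle, conditioned on what the executor produced. This is a minimal NumPy sketch of that interaction pattern only; the actual HRM uses learned Transformer blocks, and the update maps and dimensions here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Stand-in update matrices; the real model learns these modules.
W_h = rng.standard_normal((d, d)) * 0.3
W_l = rng.standard_normal((d, d)) * 0.3

def hrm_forward(x, n_cycles=3, steps_per_cycle=4):
    z_h = np.zeros(d)  # slow, high-level planner state
    z_l = np.zeros(d)  # fast, low-level executor state
    for _ in range(n_cycles):
        for _ in range(steps_per_cycle):
            # Executor updates every step, conditioned on the current plan.
            z_l = np.tanh(W_l @ z_l + z_h + x)
        # Planner updates once per cycle, from the executor's final state.
        z_h = np.tanh(W_h @ z_h + z_l)
    return z_h

out = hrm_forward(rng.standard_normal(d))
```

Nesting the fast loop inside the slow one is what gives the deep effective computation in a single forward pass: `n_cycles * steps_per_cycle` recurrent updates without any chain-of-thought tokens in between.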

Read More

Understanding Recurrence in Modern Models

We’ve all heard of recurrent neural networks (RNNs), the workhorse of sequence modeling for decades. RNNs explicitly model sequences by maintaining a hidden state that evolves over time, allowing the network to ‘remember’ information from previous inputs. But recurrence isn’t limited to RNNs. In fact, there are many ways that modern models implement some form of recurrence, often in unexpected ways.
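The classic RNN recurrence fits in a few lines: a hidden state `h` is folded together with each new input, so by the end of the sequence it summarizes everything seen so far. Random NumPy weights stand in for trained parameters here.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, input_dim = 3, 2
W_h = rng.standard_normal((hidden_dim, hidden_dim)) * 0.5
W_x = rng.standard_normal((hidden_dim, input_dim)) * 0.5

def rnn(xs):
    # The hidden state h evolves over time: each step mixes the previous
    # state with the new input, letting the network 'remember' the past.
    h = np.zeros(hidden_dim)
    for x in xs:
        h = np.tanh(W_h @ h + W_x @ x)
    return h

sequence = rng.standard_normal((5, input_dim))
h_final = rnn(sequence)
```

Everything else in this post is a variation on that loop: change what recurs (time, depth, chunks of tokens) and you get the other architectures discussed here.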

Read More

ARC-AGI 3 2025 July Demo

ARC-AGI-3 is the latest challenge in François Chollet’s ARC Prize. While still under development, the authors released a sample of the challenges and are running a small competition. In this post I’d like to discuss my attempt at ‘hand-writing’ some solutions and what it told me about what a real solution would require. If you’d like to know more about the ARC-AGI-3 challenge, I previously wrote about it here.

Read More

Visualising Code Diffs

When we write code we typically leverage something like Git or Mercurial to track changes we make to files. These systems make it easy to see what has changed at a glance without picking through every line of code. Personally, I use VSCode’s Git integration. But who writes code themselves these days? When producing code with LLMs we still want to be able to track diffs easily.

Read More