Self-Supervised Learning Methods

Self-supervised learning is a popular method for training models across many fields of AI. It represents a paradigm that does away with one of the toughest challenges of machine learning applications: labelling a dataset. Self-supervised learning exploits massive unlabelled datasets by generating its own labels, learning useful representations of the data that can later be reused for downstream tasks, often via a pretext objective unrelated to the final task. In this article, we’ll discuss the key self-supervised learning methods and how they relate to each other.
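To make the idea concrete, here is a minimal sketch of one common pretext task, masked prediction, where the labels are the data itself. All names and sizes (VOCAB, D_MODEL, the toy batch, the 15% mask rate) are illustrative assumptions, not taken from any specific paper.

```python
import torch
import torch.nn as nn

VOCAB, D_MODEL, MASK_ID = 1000, 64, 0  # hypothetical sizes; 0 reserved as [MASK]

encoder = nn.Sequential(
    nn.Embedding(VOCAB, D_MODEL),
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True),
        num_layers=2,
    ),
)
head = nn.Linear(D_MODEL, VOCAB)  # predicts the original token at each position

tokens = torch.randint(1, VOCAB, (8, 32))   # a batch from an unlabelled corpus
mask = torch.rand(tokens.shape) < 0.15      # hide ~15% of the tokens
corrupted = tokens.masked_fill(mask, MASK_ID)

logits = head(encoder(corrupted))           # (8, 32, VOCAB)
# The supervision signal is free: reconstruct the masked positions.
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
```

After pretraining, the `head` would be discarded and the `encoder`'s representations reused for a downstream task.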

Read More

30 Must Read AI Papers

Whilst the popularity of AI continues to soar, it can feel like new discoveries arrive every day. It’s easy to get lost in the current literature, so it’s worth taking a step back and putting the field in context. If you’re new to AI, this is a great opportunity to make connections; if you’ve been around for a while, it’s a chance to review. The following is a list of the top 30 AI research papers, compiled based on their impact on the overall AGI discourse and ordered chronologically. (AI is a wider field spanning areas such as computer vision; this list focuses on the path to AGI-level models.)

Read More

Deep Equilibrium Models and the Implicit Function Theorem

Deep Equilibrium Models (DEQs) are a class of models that represent ‘infinite depth’ neural networks by applying a single weight-tied layer recursively until it converges to a fixed point. Think Recurrent Neural Networks, but instead of recurring in time, they recur in depth. DEQs are interesting because this effectively unbounded depth lets them represent complex reasoning procedures that require many steps, compared to something like a fixed 16-layer transformer. DEQs have demonstrated competitive performance in language modeling and vision tasks, often improving accuracy while reducing memory by up to 88%.
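The following is a minimal sketch of the idea under simplifying assumptions: a single weight-tied layer `f`, naive fixed-point iteration in the forward pass, and a cheap one-step approximation of the implicit-function-theorem gradient (differentiating through only the final application of `f`). The layer definition and sizes are illustrative, not the published architecture.

```python
import torch
import torch.nn as nn

class DEQ(nn.Module):
    def __init__(self, dim, tol=1e-4, max_iter=50):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())
        self.tol, self.max_iter = tol, max_iter

    def layer(self, z, x):
        return self.f(torch.cat([z, x], dim=-1))

    def forward(self, x):
        # Forward: iterate z = f(z, x) to a fixed point z*, without storing
        # intermediate activations (this is where the memory savings come from).
        z = torch.zeros_like(x)
        with torch.no_grad():
            for _ in range(self.max_iter):
                z_next = self.layer(z, x)
                if (z_next - z).norm() < self.tol:
                    z = z_next
                    break
                z = z_next
        # Backward: re-attach the graph for a single step at the equilibrium.
        # The implicit function theorem is what justifies differentiating at
        # z* directly rather than through the whole iteration.
        return self.layer(z.detach(), x)

x = torch.randn(4, 32)
out = DEQ(32)(x)
out.sum().backward()
```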

Read More

The Hierarchical Reasoning Model

The Hierarchical Reasoning Model (HRM) introduces a biologically inspired recurrent architecture designed to overcome the reasoning limitations of standard Transformers and Chain-of-Thought (CoT) prompting. Comprising two interdependent modules, a slow high-level planner and a fast low-level executor, HRM achieves deep computational reasoning in a single forward pass without pretraining or intermediate supervision. With just 27M parameters and 1,000 training examples, it surpasses much larger models on benchmarks like ARC-AGI, Sudoku-Extreme, and Maze-Hard, demonstrating near-perfect accuracy on tasks that typically require symbolic search and backtracking.
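Below is a sketch of the two-timescale recurrence only: a fast low-level module that updates every step, and a slow high-level module that updates once per cycle after the inner loop settles. The GRU cells, sizes, and loop counts are illustrative assumptions, not the paper’s actual modules.

```python
import torch
import torch.nn as nn

class TwoTimescaleCore(nn.Module):
    def __init__(self, dim, cycles=4, low_steps=8):
        super().__init__()
        self.low = nn.GRUCell(2 * dim, dim)   # fast "executor"
        self.high = nn.GRUCell(dim, dim)      # slow "planner"
        self.cycles, self.low_steps = cycles, low_steps

    def forward(self, x):
        zL = torch.zeros_like(x)
        zH = torch.zeros_like(x)
        for _ in range(self.cycles):
            for _ in range(self.low_steps):
                # The executor conditions on the input and the planner state.
                zL = self.low(torch.cat([x, zH], dim=-1), zL)
            # The planner updates only once per cycle, on the settled executor state.
            zH = self.high(zL, zH)
        return zH

x = torch.randn(4, 64)
print(TwoTimescaleCore(64)(x).shape)  # torch.Size([4, 64])
```

Nesting the loops this way gives cycles × low_steps effective recurrent steps in a single forward pass, which is the source of the ‘deep computational reasoning’ claim.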

Read More

Understanding Recurrence in Modern Models

We’ve all heard of recurrent neural networks (RNNs), the workhorse of sequence modeling for decades. RNNs explicitly model sequences by maintaining a hidden state that evolves over time, allowing the network to ‘remember’ information from previous inputs. But recurrence isn’t limited to RNNs. In fact, there are many ways that modern models implement some form of recurrence, often in unexpected ways.
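The explicit form of that recurrence fits in a few lines: a hidden state h carried across time steps, updated by the same weights at every step. This is a minimal hand-rolled Elman-style update; the sizes and random weights are placeholders.

```python
import torch

T, D_IN, D_H = 10, 8, 16
W_x = torch.randn(D_IN, D_H) * 0.1
W_h = torch.randn(D_H, D_H) * 0.1

xs = torch.randn(T, D_IN)   # a length-T input sequence
h = torch.zeros(D_H)        # initial hidden state
for x_t in xs:
    # h_t = tanh(x_t W_x + h_{t-1} W_h): the hidden state is the 'memory'.
    h = torch.tanh(x_t @ W_x + h @ W_h)
print(h.shape)  # torch.Size([16])
```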

Read More