Nelder-Mead Optimisation

Optimisation is at the core of AI research. We spawn instances of massive models with trillions of parameters and then try to optimise them towards some goal, typically represented by a loss function. We’ve become really good at this type of optimisation because it has a key property: we can calculate the gradient. Packages such as PyTorch automatically calculate how the loss function would change if we tweaked each parameter (the gradients), which allows us to make meaningful progress towards the goal. But what if you don’t have gradients?
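
To make that concrete, here’s a minimal sketch of gradient-free optimisation using SciPy’s Nelder-Mead implementation; the Rosenbrock function below is just a hypothetical stand-in for an objective whose gradients we can’t compute.

    import numpy as np
    from scipy.optimize import minimize

    # A stand-in objective treated as a black box: the classic Rosenbrock
    # function, whose minimum sits at (1, 1). We never provide a gradient.
    def rosenbrock(x):
        return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

    x0 = np.array([-1.2, 1.0])  # arbitrary starting point
    result = minimize(rosenbrock, x0, method="Nelder-Mead")
    print(result.x)  # approaches [1. 1.] without ever computing a gradient

Under the hood, Nelder-Mead only ever evaluates the objective at candidate points, which is exactly why it applies when gradients are unavailable.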

Read More

AI Playgrounds - The Birth of Something Brilliant

The traditional AI agents that we see in the news, such as AlphaGo and Pokerbot, were trained in well-scoped environments, but as we look to the future, we want AI that is capable of tackling more complex tasks in richer environments. We’ve seen some really amazing environments and agents pop up recently, so let’s explore them! Across these environments, agents have been programmed to acquire knowledge, navigate complex social interactions, and learn how to interact with the physical world, and some have been thrown in at the deep end with virtually unlimited access to the world’s resources.

Read More

Self-Supervised Learning Methods

Self-supervised learning is a popular method for training models across many fields of AI. It represents a paradigm that does away with one of the toughest challenges of machine learning applications: labelling a dataset. Self-supervised learning leverages massive unlabelled datasets by creating its own labels, often through a pretext objective unrelated to the downstream task, in order to learn useful representations of the data that can later be reused for downstream tasks. In this article, we’ll discuss the key self-supervised learning methods and how they relate to each other.
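
As an illustration of the idea (not a method from the article itself), here’s a minimal PyTorch sketch of one classic pretext task, rotation prediction; the tiny backbone and random batch are hypothetical stand-ins.

    import torch
    import torch.nn as nn

    # Pretext task: predict which of four rotations (0/90/180/270 degrees)
    # was applied to each image. The labels come from the data itself,
    # so no human annotation is needed.
    backbone = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )
    head = nn.Linear(16, 4)  # classify the 4 possible rotations

    images = torch.randn(8, 3, 32, 32)  # stand-in for an unlabelled batch
    labels = torch.randint(0, 4, (8,))  # self-generated: which rotation we apply
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])

    logits = head(backbone(rotated))
    loss = nn.functional.cross_entropy(logits, labels)
    loss.backward()  # the backbone's representations can later be reused downstream

The rotation head is discarded after pre-training; it is the backbone’s learned representations that transfer to downstream tasks.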

Read More

30 Must-Read AI Papers

Whilst the popularity of AI continues to soar, it can feel like we’re making new discoveries every day. It’s easy to get lost in the current literature, but it’s important to take a step back and put the field in context. If you’re new to the AI field, it’s a great opportunity to make connections; if you’ve long been around, it’s a chance to review. The following is a list of the top 30 AI research papers, compiled based on their impact on the overall AGI discourse and ordered chronologically. (AI is a wider field, spanning areas such as computer vision; this list focuses on our path to AGI-level models.)

Read More

Deep Equilibrium Models and the Implicit Function Theorem

Deep Equilibrium Models (DEQs) are a class of models that represent ‘infinite depth’ neural networks through the use of recursion. Think recurrent neural networks, but instead of recurring in time, they recur in depth: a single layer is applied repeatedly until its output settles at a fixed point. DEQs are interesting because this effective depth lets them represent complex reasoning approaches that require many steps, compared to a fixed-depth network such as a 16-layer transformer. DEQs have demonstrated competitive performance in language modelling and vision tasks, often improving accuracy while reducing memory by up to 88%.
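
For intuition, here’s a minimal PyTorch sketch of the forward pass under simple assumptions: one layer iterated to the fixed point z* = f(z*, x) by plain iteration. (Real DEQs use faster root-finders and the implicit function theorem to backpropagate through the equilibrium; the layer and dimensions here are hypothetical.)

    import torch
    import torch.nn as nn

    class DEQLayer(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.linear = nn.Linear(2 * dim, dim)

        def forward(self, z, x):
            # one step of the depth recurrence: new state from old state + input
            return torch.tanh(self.linear(torch.cat([z, x], dim=-1)))

    def deq_forward(f, x, tol=1e-4, max_iter=100):
        z = torch.zeros_like(x)
        for _ in range(max_iter):
            z_next = f(z, x)
            if (z_next - z).norm() < tol:  # converged to an approximate fixed point
                return z_next
            z = z_next
        return z

    f = DEQLayer(dim=16)
    x = torch.randn(4, 16)
    z_star = deq_forward(f, x)  # 'infinite depth' output from a single layer's weights

Because only one layer’s weights and the final equilibrium state need to be stored, memory does not grow with the effective depth, which is the source of the savings mentioned above.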

Read More