Hello, Obsidian!

I’ve been a long-time user of Notion for knowledge management. For reference, I have around 1000 pages covering topics from physics, to sports, to dreams and aspirations. I manage my to-do lists, travel plans, and goal tracking there. It’s a terrific visual tool and readily accessible everywhere. But I want more.

Read More

Coding World Models with LLMs

Recently I ran an experiment in which I tried to get LLMs to output explicit world models in the form of Python code, which could then be used as the core of an MCTS policy for control in basic gymnasium environments. This differs from typical world-model learning, where the model would be something like a parameterised neural network. The reason we would care about something like this is on-the-fly reasoning, such as in ARC-AGI-3, where agents must act in a completely new environment.
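To make the idea concrete, here is a minimal sketch of the shape such an LLM-emitted world model might take: a pure-Python transition function for a toy 1-D "reach the goal" task, driven by a simple random-rollout planner (a much-simplified stand-in for full MCTS). The environment, function names, and dynamics are all illustrative assumptions, not code from the actual experiment.

```python
import random

# Hypothetical example of the kind of world model an LLM might emit:
# a pure-Python transition function for a toy 1-D "reach the goal" task.
# (Dynamics and names are illustrative, not from the original experiment.)
def step(state, action):
    """state: int position in [0, 10]; action: 0 = left, 1 = right."""
    pos = max(0, min(10, state + (1 if action == 1 else -1)))
    reward = 1.0 if pos == 10 else 0.0
    done = pos == 10
    return pos, reward, done

def rollout_value(state, depth=20):
    """Estimate a state's value with one random rollout through the model."""
    total = 0.0
    for _ in range(depth):
        state, reward, done = step(state, random.choice([0, 1]))
        total += reward
        if done:
            break
    return total

def plan(state, n_rollouts=50):
    """Pick the action whose simulated rollouts score best on average.
    This is plain Monte Carlo planning, a simpler cousin of MCTS."""
    scores = {}
    for action in (0, 1):
        nxt, reward, done = step(state, action)
        sims = [reward if done else reward + rollout_value(nxt)
                for _ in range(n_rollouts)]
        scores[action] = sum(sims) / n_rollouts
    return max(scores, key=scores.get)
```

The appeal is that `step` is explicit, inspectable code: the planner never touches the real environment, only the model, which is exactly the property you want when an agent lands in an unfamiliar environment and must reason before acting.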

Read More

Normalising a Short AGI Horizon

AI 2027 was recently released and is another great forecast of how the next few years could play out, similar to another favourite of mine, Situational Awareness. While listening to the authors on the Dwarkesh Podcast, I tried to dig a bit deeper into my own thoughts. Why do I also believe AGI is coming soon? Why does it still seem so fantastical, despite my belief? To help myself, I decided to jot down the key points that have influenced that thinking. It’s far from an exhaustive list, but it covers some of the latest facts that help me stay grounded in my predictions.

Read More

Think it Faster in AI

Ever since reading “How could I have thought that faster?” this year, I have been trying to put it into practice. Working with AI models, I found there are plenty of opportunities. One can spend hours on some buggy code only to find the bug they fixed wasn’t the real problem after all, or that someone had already solved it five years ago on Stack Overflow. One can invest hours into modelling only to realise that there was a far simpler approach, if you just thought about it from another angle…

Read More