Why Autonomous Software?

Jye Sawtell-Rickson · March 2, 2025

With “agents” the hot topic of 2025, we should take a step back and ask ourselves, why do we want autonomous software? Let’s explore some reasons we’d want agents making decisions themselves.

Several situations come to mind, such as a robot in a rescue zone or an AI trading bot. These share key characteristics that urge us to create autonomous agents:

  • Speed: if decisions must be made rapidly, then it’s useful to have an autonomous agent. Examples include a high-frequency trading bot, or an agent which controls a vehicle.
  • Accessibility: if decisions must be made in locations that are inaccessible, then it’s best they’re made autonomously. Examples include a rescue robot in a disaster zone with poor connectivity, or a robot operating behind enemy lines during a war.
  • Complexity: if a decision is so complex that a human cannot possibly comprehend the reasoning steps behind it, then an agent that can fully comprehend the problem may be better suited. An example could be an agent that must make decisions about the economic principles governing a country.
  • Bias: if a decision is too difficult for people to make without bias, but we want it to be unbiased, an agent can be put in control of the decision making. Examples include war-time protocols for mutually assured destruction (MAD), where a human may be incapable of responding due to their ethics, but a robot can make the decision without hesitation.
  • Reliability: if an agent must continually make decisions regardless of environmental effects, it could be better to have an autonomous agent make them. An example is an agent that must continually make decisions as the CEO of a multinational company.

The characteristics above all represent good reasons to prefer autonomous agents over passive agents with a human in the loop. However, one could argue that the majority of these ‘autonomous’ situations are just passive systems following what they were programmed to do. For example, in the case of making decisions under speed constraints, a system could simply be executing pre-programmed behaviour. Based on this, I’d argue that the one truly key characteristic is the one described as “complexity”.

When it comes to making complex decisions beyond human and passive-agent reasoning, it is possible that humans could set up a learning process they don’t directly control, which leads to an agent that can learn on its own, improve itself, and make decisions. This is a great application for autonomous agents, but also the most dangerous one, and the one the AI safety community is working to understand.
