Agents, both natural and artificial, perform actions in order to bring about goals. Planning even simple actions can be a complex process, often requiring the coordination of many smaller actions, some of which must be taken to obtain information needed to plan further actions. To be successful, the agent’s actions must in some way anticipate their effects. This suggests that there are deep links between information theory, action planning, and the mathematics of causality.
To understand these relationships formally, we build on an idea from robotic control known as “planning as inference.” Conceptually, this procedure consists of imagining that the goal has already been achieved, and then inferring the most likely action sequence (or more broadly, the most likely action policy) that could have brought it about. This leads to some surprising connections between action planning and statistical inference. We explore these connections using techniques from information geometry, building a mathematical theory in which information flow and intentional actions can be understood as part of the same picture. This yields conceptual insights into the causal relationships in both directions between actions and their effects, the difficulty of coordinating actions across time and between agents, and the role played by information.
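The planning-as-inference idea can be illustrated with a minimal sketch: condition on the goal having been achieved, then infer the most likely action that could have brought it about via Bayes' rule. The actions, prior, and likelihood values below are illustrative assumptions, not taken from the text.

```python
# Toy "planning as inference": treat the goal as observed evidence and
# infer the action most likely to have produced it.

actions = ["left", "right", "wait"]

# Prior over actions, P(a) -- assumed values for illustration.
prior = {"left": 0.4, "right": 0.4, "wait": 0.2}

# Likelihood that each action achieves the goal, P(goal | a) -- also assumed.
likelihood = {"left": 0.10, "right": 0.70, "wait": 0.05}

# Posterior over actions given goal success: P(a | goal) ∝ P(goal | a) P(a).
unnormalized = {a: likelihood[a] * prior[a] for a in actions}
Z = sum(unnormalized.values())
posterior = {a: p / Z for a, p in unnormalized.items()}

# Planning as inference: select the posterior-mode action.
best = max(posterior, key=posterior.get)
print(best)  # → right
```

The same conditioning step generalizes from a single action to a policy: one conditions on goal achievement and infers a distribution over entire action sequences rather than one choice.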
Ultimately, we hope that this will be useful not only for understanding cognition and designing artificial agents, but also for understanding what it means to be an agent, and why goal-directed agents exist in the physical world at all.