Augmentative Topology Agents For Open-ended Learning

Abstract

We tackle the problem of open-ended learning by introducing a method that simultaneously evolves agents while also evolving increasingly challenging environments. Unlike previous open-ended approaches that optimize agents using a fixed neural network topology, we hypothesize that generalization can be improved by allowing agents’ controllers to become more complex as they encounter more difficult environments. Our method, Augmentative Topology EPOET (ATEP), extends the Enhanced Paired Open-Ended Trailblazer (EPOET) algorithm by allowing agents to evolve their own neural network structures over time, adding complexity and capacity as necessary. Our empirical results demonstrate that ATEP produces general agents capable of solving more environments than fixed-topology baselines. We also investigate mechanisms for transferring agents between environments and find that a species-based approach further improves the performance and generalization of agents.
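As a rough illustration of the core idea, the "augmentative topology" in ATEP follows the spirit of NEAT-style neuroevolution, in which structural mutations add nodes and connections to a network genome over time. The sketch below is a minimal, hypothetical Python example of such mutations; the class and method names are illustrative assumptions, not the authors' implementation.

```python
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class Connection:
    src: int
    dst: int
    weight: float
    enabled: bool = True

@dataclass
class Genome:
    num_nodes: int                                   # inputs + outputs + hidden nodes
    connections: List[Connection] = field(default_factory=list)

    def mutate_add_connection(self) -> None:
        # Connect two previously unconnected nodes with a random weight.
        src, dst = random.sample(range(self.num_nodes), 2)
        if not any(c.src == src and c.dst == dst for c in self.connections):
            self.connections.append(Connection(src, dst, random.uniform(-1.0, 1.0)))

    def mutate_add_node(self) -> None:
        # Split an enabled connection: disable it and route through a new hidden
        # node. Using weight 1.0 on the incoming edge and the old weight on the
        # outgoing edge keeps the network's initial behaviour close to the original,
        # so capacity grows without immediately disrupting the agent's policy.
        enabled = [c for c in self.connections if c.enabled]
        if not enabled:
            return
        conn = random.choice(enabled)
        conn.enabled = False
        new_node = self.num_nodes
        self.num_nodes += 1
        self.connections.append(Connection(conn.src, new_node, 1.0))
        self.connections.append(Connection(new_node, conn.dst, conn.weight))

# Example usage (hypothetical sizes): start with 4 inputs and 1 output, then grow.
genome = Genome(num_nodes=5)
genome.mutate_add_connection()
genome.mutate_add_node()
```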

Publication
Genetic and Evolutionary Computation Conference Companion
Muhammad Umair Nasir

I love tackling challenges in Open-ended Learning and Jiu Jitsu.

Michael Beukman

I like doing cool things, such as generating levels in Minecraft and teaching robots how to kick a ball, and I go rock climbing in my spare time.

Steven James
Deputy Lab Director

My research interests include reinforcement learning and planning.

Christopher Cleghorn
Senior Applied Scientist