Compositional Reinforcement Learning
In this line of work, we are interested in techniques that allow an agent to leverage past knowledge to solve new tasks quickly. In particular, we focus on how agents can acquire behaviours that can be combined to generate interesting, novel abilities. One particular focus is on applying Boolean operators to learned behaviours to generate provably optimal solutions to new tasks. Not only are these approaches human-understandable, but they also result in a combinatorial explosion in an agent's abilities, which is key to tackling the multitask or lifelong learning setting.
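The Boolean operators mentioned above can be sketched in tabular form. This is a minimal illustration, not our actual method: the task names and Q-values are invented, and the exactness of the min/max composition depends on assumptions about the task family that are glossed over here.

```python
import numpy as np

# Hypothetical tabular Q-functions for two previously learned tasks,
# indexed as Q[state, action]. Random values stand in for trained ones.
num_states, num_actions = 4, 2
rng = np.random.default_rng(0)
Q_A = rng.uniform(size=(num_states, num_actions))  # e.g. "reach the red object"
Q_B = rng.uniform(size=(num_states, num_actions))  # e.g. "reach the square object"

# In this simplified sketch, OR corresponds to an elementwise max over the
# learned Q-functions and AND to an elementwise min.
Q_A_or_B = np.maximum(Q_A, Q_B)   # behaviour for "A OR B", no further learning
Q_A_and_B = np.minimum(Q_A, Q_B)  # behaviour for "A AND B"

# Acting greedily with respect to a composed Q-function yields a policy
# for the new task, even though that task was never trained on directly.
policy = Q_A_or_B.argmax(axis=1)
```

Because each learned behaviour can participate in arbitrarily many Boolean expressions, the number of solvable tasks grows combinatorially with the number of behaviours learned.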
Skills & Symbols
Here we focus on learning abstract representations, which we believe are an essential component of applying reinforcement learning to the real world. In particular, we focus on skill- and symbol-discovery, as well as the interplay between the two. We have applied our approaches to challenging pixel-based tasks that require high-level planning, and have shown that symbolic representations can be learned directly from low-level sensor data.
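One way to picture a learned symbol is as a classifier over low-level states, grounded by where a skill can be executed. The sketch below is purely illustrative and is not our published method: the states, labels, symbol name, and the bounding-box classifier are all assumptions made for the example.

```python
import numpy as np

def learn_symbol(states, can_execute):
    """Ground a symbol as the region of state space where a skill ran
    successfully: here, an axis-aligned bounding box over positive states."""
    positives = states[can_execute]
    lo, hi = positives.min(axis=0), positives.max(axis=0)
    return lambda s: bool(np.all(s >= lo) and np.all(s <= hi))

# Toy low-level sensor states (e.g. 2-D positions) with labels recording
# whether a skill could be executed from each state.
states = np.array([[0.1, 0.2], [0.3, 0.4], [0.9, 0.9], [0.2, 0.3]])
can_execute = np.array([True, True, False, True])

# "near_door" is a hypothetical symbol name for this grounded region.
near_door = learn_symbol(states, can_execute)
print(near_door(np.array([0.2, 0.25])))  # inside the grounded region -> True
```

Once grounded this way, symbols like `near_door` can serve as propositions for a high-level planner, even though they were learned directly from low-level data.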
Theory of Mind
Our research here focuses on modelling external agents in an environment. These agents may be other robots or humans with their own goals or intentions that are not directly observable by our agent. Inferring this information through observation or communication can allow agents to collaborate more effectively and complete the required task optimally. In particular, as robots become more ubiquitous in the real world, human-robot interaction will be key to ensuring productive and safe environments.
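Inferring a hidden goal from observed behaviour can be framed as Bayesian inference. The following is a minimal sketch under invented assumptions: the candidate goals, the action-likelihood table, and the observations are all illustrative, not drawn from our systems.

```python
import numpy as np

goals = ["fetch_cup", "open_door"]  # hypothetical candidate goals
prior = np.array([0.5, 0.5])        # uniform prior over the other agent's goal

# P(action | goal): how likely each observable action is under each goal.
# Rows index goals; columns index actions ["move_left", "move_right"].
likelihood = np.array([[0.8, 0.2],
                       [0.3, 0.7]])

observed = [0, 0]  # the other agent was seen moving left twice

# Sequential Bayesian update: multiply in each observation's likelihood
# and renormalise to keep a valid distribution over goals.
posterior = prior.copy()
for a in observed:
    posterior *= likelihood[:, a]
    posterior /= posterior.sum()

# After two "move_left" observations, "fetch_cup" is the more probable goal.
print(dict(zip(goals, posterior.round(3))))
```

With a posterior over the other agent's goal in hand, our agent can choose actions that complement, rather than conflict with, what its partner is trying to do.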
RoboCup
Our group competes in the RoboCup 3D Soccer Simulation League, in which a team of 11 simulated Nao robots plays football against teams from around the world. Our focus here is both on improving the low-level control of individual robots and on incorporating high-level, multi-agent decision making into the team's strategy.