World Value Functions: Knowledge Representation for Learning and Planning

Abstract

We propose world value functions (WVFs), a type of goal-oriented general value function that represents how to solve not just a given task, but any other goal-reaching task in an agent's environment. This is achieved by equipping an agent with an internal goal space defined as all the world states where it experiences a terminal transition. The agent can then modify the standard task rewards to define its own reward function, which provably drives it to learn how to achieve all reachable internal goals, and the value of doing so in the current task. We demonstrate two key benefits of WVFs in the context of learning and planning. In particular, given a learned WVF, an agent can compute the optimal policy in a new task by simply estimating the task's reward function. Furthermore, we show that WVFs also implicitly encode the transition dynamics of the environment, and so can be used to perform planning. Experimental results show that WVFs can be learned faster than regular value functions, while their ability to infer the environment's dynamics can be used to integrate learning and planning methods to further improve sample efficiency.
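The abstract describes the core mechanism only at a high level: a goal-conditioned value function over an internal goal space of terminal-transition states, learned with a modified reward that penalises terminating away from the conditioning goal. The snippet below is a minimal tabular sketch of that idea, not the paper's implementation. The environment interface (`env.reset`, `env.step`, `env.actions`), the penalty constant `R_MIN`, and the hyperparameters are illustrative assumptions.

```python
import random
from collections import defaultdict

# Hedged sketch: learn a world value function Q(s, g, a) with Q-learning.
# Assumed interface: env.reset() -> state, env.step(a) -> (next_state, reward, done),
# env.actions is a list of discrete actions. States and goals must be hashable.

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # assumed hyperparameters
R_MIN = -10.0                            # assumed penalty for terminating away from the goal

def learn_wvf(env, episodes=5000):
    Q = defaultdict(float)   # Q[(state, goal, action)]
    goals = set()            # internal goal space: states seen at terminal transitions

    def greedy(s, g):
        return max(env.actions, key=lambda a: Q[(s, g, a)])

    for _ in range(episodes):
        s = env.reset()
        # Condition the episode on a previously discovered goal (explore if none yet).
        g = random.choice(tuple(goals)) if goals else None
        done = False
        while not done:
            if g is None or random.random() < EPSILON:
                a = random.choice(env.actions)
            else:
                a = greedy(s, g)
            s_next, r, done = env.step(a)
            if done:
                goals.add(s_next)   # terminal transition: the state joins the goal space
            # Update (s, a) for every known goal, replacing the task reward with R_MIN
            # whenever the episode terminates in a state other than that goal.
            for goal in goals:
                r_bar = R_MIN if (done and s_next != goal) else r
                target = r_bar if done else r_bar + GAMMA * max(
                    Q[(s_next, goal, b)] for b in env.actions)
                Q[(s, goal, a)] += ALPHA * (target - Q[(s, goal, a)])
            s = s_next
    return Q, goals
```

Given such a `Q` and a new task's estimated rewards, acting greedily with respect to the best goal for each state is one plausible way to recover a task policy, in the spirit of the learning-and-planning use cases the abstract mentions.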

Publication
Bridging the Gap Between AI Planning and Reinforcement Learning Workshop at ICAPS
Geraud Nangue Tasse
Associate Lecturer

I am an IBM PhD Fellow interested in reinforcement learning (RL), since it is the subfield of machine learning with the most potential for achieving AGI.

Benjamin Rosman
Lab Director

I am a Professor in the School of Computer Science and Applied Mathematics at the University of the Witwatersrand in Johannesburg. I work in robotics, artificial intelligence, decision theory and machine learning.

Steven James
Deputy Lab Director

My research interests include reinforcement learning and planning.