Reinforcement Learning with Parameterized Actions

Abstract

We introduce a model-free algorithm for learning in Markov decision processes with parameterized actions—discrete actions with continuous parameters. At each step the agent must select both which action to use and which parameters to use with that action. We introduce the Q-PAMDP algorithm for learning in these domains, show that it converges to a local optimum, and compare it to direct policy search in the goal-scoring and Platform domains.
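
As a rough illustration of the setting the abstract describes (this is not the paper's Q-PAMDP implementation; the action names, linear Q-function, and Gaussian parameter policy below are assumptions for the sketch), a parameterized action pairs a discrete choice with a continuous parameter vector, and selecting one means choosing the action and then its parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameterized action space: each discrete action owns a
# continuous parameter vector of its own dimensionality (e.g. a kick
# parameterized by angle and power). Names here are illustrative only.
PARAM_DIMS = {"kick": 2, "dash": 1}

def q_value(state, action, weights):
    # Linear Q-value over state features for the discrete action choice.
    return float(weights[action] @ state)

def sample_parameters(state, action, theta):
    # Gaussian parameter policy whose mean is a linear function of the state.
    mean = theta[action] @ state
    return rng.normal(mean, 0.1)

# One parameterized-action selection: pick the discrete action greedily from
# the Q-values, then draw its continuous parameters from that action's policy.
state = np.array([0.5, -0.2])
weights = {a: rng.normal(size=2) for a in PARAM_DIMS}
theta = {a: rng.normal(size=(PARAM_DIMS[a], 2)) for a in PARAM_DIMS}

action = max(PARAM_DIMS, key=lambda a: q_value(state, a, weights))
params = sample_parameters(state, action, theta)
print(action, params)
```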

Publication
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence
Pravesh Ranchod
Lecturer

I am a Lecturer in the School of Computer Science and Applied Mathematics at the University of the Witwatersrand.