Skill composition is a growing area of interest within Reinforcement Learning (RL) research. For example, when designing a robot for household assistance, we cannot feasibly train it for every task it might face over a period of months or years. Instead, it could benefit from having a set of broadly useful skills and the ability to adapt or combine these skills to handle specific situations. This approach mirrors how humans actually learn: rather than acquiring each new ability from scratch whenever a novel problem arises, they gradually expand a repertoire of basic skills that can be combined in endless ways. Existing work has demonstrated how simple skills can be composed using Boolean operators to solve new, unseen tasks without further learning. However, this approach assumes that the learned value functions for each atomic skill are optimal, an assumption that is violated in most practical cases. We propose a method that instead learns the composition operators using evolutionary strategies. We empirically verify our approach in tabular and high-dimensional environments. The results demonstrate that our approach outperforms existing composition methods when faced with learned, suboptimal behaviours, while also yielding more robust agents and enabling transfer between domains.
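
As a rough, illustrative sketch only (not the paper's implementation), the snippet below contrasts the two ideas referenced above: fixed Boolean composition of value functions, as in prior work where OR corresponds to an element-wise max over the skills' Q-values, versus searching for the combination operator itself with a simple evolutionary strategy. The value tables `Q1`/`Q2`, the parameterised `soft_operator`, and the `fitness` objective are all hypothetical stand-ins; in the paper's setting the fitness would be the return achieved on the composed task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy learned (possibly suboptimal) value tables for two atomic skills,
# with shape (num_states, num_actions). Purely illustrative data.
Q1 = rng.normal(size=(10, 4))
Q2 = rng.normal(size=(10, 4))

def boolean_or(Q_a, Q_b):
    """Fixed OR composition assumed by prior work: element-wise max."""
    return np.maximum(Q_a, Q_b)

def soft_operator(Q_a, Q_b, w):
    """Hypothetical parameterised operator: a smoothed max with temperature
    exp(w[0]); smaller temperatures approach the hard element-wise max."""
    t = np.exp(w[0])
    return t * np.logaddexp(Q_a / t, Q_b / t)

def fitness(w):
    """Stand-in objective: closeness of the learned operator to the exact OR.
    In practice this would be the composed task's return, not a known target."""
    return -np.mean((soft_operator(Q1, Q2, w) - boolean_or(Q1, Q2)) ** 2)

# Minimal (1+1)-style evolutionary strategy over the operator parameters.
w = np.zeros(1)
best = fitness(w)
for _ in range(200):
    candidate = w + 0.1 * rng.normal(size=w.shape)
    f = fitness(candidate)
    if f > best:
        w, best = candidate, f

print("learned temperature:", np.exp(w[0]), "fitness:", best)
```

The design point the sketch is meant to convey is that the hard max/min operators are only one fixed choice; treating the operator as a parameterised object and optimising it with a gradient-free evolutionary search lets the composition adapt to value functions that are not exactly optimal.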