Reinforcement Learning Generalization Using State Aggregation with a Maze-Solving Problem.

Walid Gomaa and Mohamed Gunady. Deposited 2017-02-01T13:55:49Z (GMT).
Conference paper: In Proc. of the 2012 Japan-Egypt IEEE Conference on Electronics, Communications, and Computers (JEC-ECC 2012), pages 157–162, Alexandria, Egypt, March 2012.

Abstract:
Reinforcement learning (RL) depends on constructing a lookup table for the value function over state-action pairs. Consequently, when learning in environments with a large-scale state-action space, RL fails to achieve practical convergence rates. Generalizing the original state-action space into a more compact representation is therefore crucial for many practical applications. In this paper, we propose a generalization technique using 'state aggregation'. We apply this technique to Q-learning and show how to aggregate similar states together. The modified RL system architecture is presented along with the new algorithm. The proposed approach is tested and analyzed on a maze problem.
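To make the idea concrete, the following is a minimal sketch of tabular Q-learning with state aggregation on a small grid maze. It is illustrative only and not the paper's algorithm: the maze layout, the coarse-grid aggregation function phi (mapping each raw cell to its 2x2 block), and all hyperparameters (alpha, gamma, eps) are assumptions made for this example.

```python
import random
from collections import defaultdict

# Illustrative sketch: tabular Q-learning where the Q-table is keyed by an
# aggregated state phi(s) rather than the raw state s. Everything below
# (maze layout, aggregation scheme, hyperparameters) is assumed for
# illustration and is not taken from the paper.

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
SIZE, GOAL = 8, (7, 7)                        # empty 8x8 grid, goal at corner

def step(s, a):
    """Deterministic maze dynamics: move, clipped to the grid bounds."""
    r, c = s[0] + a[0], s[1] + a[1]
    nxt = (min(max(r, 0), SIZE - 1), min(max(c, 0), SIZE - 1))
    return nxt, (1.0 if nxt == GOAL else -0.01), nxt == GOAL

def phi(s, block=2):
    """State aggregation: map each cell to its 2x2 block (coarser state)."""
    return (s[0] // block, s[1] // block)

def q_learning(episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)  # keyed by (aggregated state, action index)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(200):  # cap episode length
            z = phi(s)
            # epsilon-greedy action selection over the aggregated state
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[(z, i)])
            s2, r, done = step(s, ACTIONS[a])
            z2 = phi(s2)
            best = max(Q[(z2, i)] for i in range(len(ACTIONS)))
            # standard Q-learning update, applied to the aggregated entry
            Q[(z, a)] += alpha * (r + gamma * best - Q[(z, a)])
            s = s2
            if done:
                break
    return Q

if __name__ == "__main__":
    Q = q_learning()
    print("learned Q entries:", len(Q))  # far fewer than SIZE * SIZE * 4
```

The point of the sketch is the lookup-table compression the abstract describes: with a 2x2 aggregation the table holds at most 16 aggregated states x 4 actions = 64 entries instead of 64 raw states x 4 actions = 256, at the cost of treating the cells within a block as indistinguishable.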