Q-learning using fuzzified states and weighted actions and its application to omni-directional mobile robot control

Bibliographic Details
Published in: 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 102-107
Main Authors: Dong-Hyun Lee, In-Won Park, Jong-Hwan Kim
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2009
ISBN: 1424448085, 9781424448081
DOI: 10.1109/CIRA.2009.5423227


More Information
Summary: The conventional Q-learning algorithm is described by a finite number of discretized states and discretized actions. When the system is represented in a continuous domain, this may cause an abrupt transition of the action as the state changes rapidly. To avoid this abrupt transition, the learning system requires finely tuned states. However, the learning time increases significantly and the system becomes computationally expensive as the number of states grows. To solve this problem, this paper proposes a novel Q-learning algorithm that uses fuzzified states and weighted actions to update its state-action values. By applying the concept of fuzzy sets to the states of Q-learning and using weighted actions, the agent responds efficiently to rapid changes of the states. The proposed algorithm is applied to an omni-directional mobile robot, and the results demonstrate the effectiveness of the proposed approach.
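
The summary describes the update scheme only at a high level. The short Python sketch below illustrates one plausible reading of it: triangular membership functions partition a scalar state, the executed action is a membership-weighted blend of each fuzzy state's selected discrete action, and the temporal-difference update is distributed over the active fuzzy states in proportion to their membership degrees. The function names, membership shapes, and constants are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    # Illustrative sketch only; not the authors' exact algorithm.
    N_FUZZY_STATES = 5          # fuzzy sets covering a scalar state in [0, 1]
    N_ACTIONS = 7               # discrete action set (e.g., velocity increments)
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

    centers = np.linspace(0.0, 1.0, N_FUZZY_STATES)
    actions = np.linspace(-1.0, 1.0, N_ACTIONS)   # action value for each index
    Q = np.zeros((N_FUZZY_STATES, N_ACTIONS))

    def memberships(x):
        """Normalized triangular membership degrees of x in each fuzzy state."""
        width = centers[1] - centers[0]
        mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
        return mu / mu.sum()

    def select_action(x):
        """Weighted action: blend each fuzzy state's (epsilon-)greedy action."""
        mu = memberships(x)
        if np.random.rand() < EPS:
            idx = np.random.randint(N_ACTIONS, size=N_FUZZY_STATES)
        else:
            idx = Q.argmax(axis=1)                # greedy action per fuzzy state
        return float(mu @ actions[idx]), idx, mu  # continuous blended action

    def update(mu, idx, reward, x_next):
        """Distribute the TD update over active fuzzy states by membership."""
        v_next = memberships(x_next) @ Q.max(axis=1)
        for i in range(N_FUZZY_STATES):
            td = reward + GAMMA * v_next - Q[i, idx[i]]
            Q[i, idx[i]] += ALPHA * mu[i] * td

Because the executed action and the value of the next state are both membership-weighted, the policy varies smoothly as the state moves between fuzzy partitions, which is the abrupt-transition problem the summary says the method addresses.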