Q-learning using fuzzified states and weighted actions and its application to omni-directional mobile robot control
| Published in | 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 102 - 107 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.12.2009 |
| ISBN | 1424448085; 9781424448081 |
| DOI | 10.1109/CIRA.2009.5423227 |
| Summary: | The conventional Q-learning algorithm is described by a finite number of discretized states and discretized actions. When the system is represented in the continuous domain, this may cause an abrupt transition of action as the state rapidly changes. To avoid this abrupt transition of action, the learning system requires finely tuned states. However, the learning time significantly increases and the system becomes computationally expensive as the number of states increases. To solve this problem, this paper proposes a novel Q-learning algorithm that uses fuzzified states and weighted actions to update its state-action value. By applying the concept of fuzzy sets to the states of Q-learning and using weighted actions, the agent responds efficiently to rapid changes of the state. The proposed algorithm is applied to an omni-directional mobile robot, and the results demonstrate the effectiveness of the proposed approach. |
|---|---|
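The abstract only sketches the idea at a high level. A minimal illustrative sketch of Q-learning with fuzzified states and membership-weighted actions might look like the following; the 1-D heading-error state, the triangular membership functions, the candidate action set, the plant model, and all parameter values are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Assumed setup: a 1-D continuous state (e.g., heading error, rad) covered by
# triangular fuzzy sets, and a small set of candidate angular-velocity commands.
STATE_CENTERS = np.linspace(-1.0, 1.0, 7)          # centers of the fuzzy state sets
ACTIONS = np.array([-0.5, -0.25, 0.0, 0.25, 0.5])  # candidate actions (rad/s)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = np.zeros((len(STATE_CENTERS), len(ACTIONS)))   # one Q-row per fuzzy state

def memberships(x):
    """Normalized triangular membership degrees of x in each fuzzy state."""
    width = STATE_CENTERS[1] - STATE_CENTERS[0]
    w = np.maximum(0.0, 1.0 - np.abs(x - STATE_CENTERS) / width)
    return w / w.sum() if w.sum() > 0 else np.ones_like(w) / len(w)

def select_actions(rng):
    """Epsilon-greedy discrete action index chosen for every fuzzy state."""
    greedy = Q.argmax(axis=1)
    explore = rng.integers(len(ACTIONS), size=len(STATE_CENTERS))
    return np.where(rng.random(len(STATE_CENTERS)) < EPS, explore, greedy)

def weighted_action(w, picks):
    """Continuous command: membership-weighted blend of the chosen actions."""
    return float(np.dot(w, ACTIONS[picks]))

def update(w, picks, reward, x_next):
    """Fuzzy Q-update: distribute the TD error over the activated fuzzy states."""
    rows = np.arange(len(STATE_CENTERS))
    q_sa = np.dot(w, Q[rows, picks])               # value of the blended action
    v_next = np.dot(memberships(x_next), Q.max(axis=1))
    td = reward + GAMMA * v_next - q_sa
    Q[rows, picks] += ALPHA * td * w               # each state updated by its weight

# Toy usage: regulate a heading error toward zero (a stand-in for the robot task).
rng = np.random.default_rng(0)
x = 0.8
for step in range(500):
    w = memberships(x)
    picks = select_actions(rng)
    u = weighted_action(w, picks)
    x_next = np.clip(x - 0.5 * u, -1.0, 1.0)       # crude plant model, dt = 0.5
    update(w, picks, reward=-abs(x_next), x_next=x_next)
    x = x_next if abs(x_next) > 0.05 else rng.uniform(-1.0, 1.0)
```

Because the executed command is a membership-weighted blend of the per-state greedy actions, it varies smoothly as the continuous state moves between fuzzy sets, which is the behavior the abstract contrasts with the abrupt action switching of conventional discretized Q-learning.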