Reinforcement Learning-based control using Q-learning and gravitational search algorithm with experimental validation on a nonlinear servo system
- A combination of Deep Q-Learning algorithm and metaheuristic GSA is offered.
- GSA initializes the weights and the biases of the neural networks.
- A comparison with classical random, metaheuristic PSO and GWO is carried out.
- The validation is done on real-time nonlinear servo system position control.
- The drawbacks of randomly initialized neural networks are mitigated.
| Published in | Information Sciences, Vol. 583, pp. 99–120 |
|---|---|
| Main Authors | Zamfirache, Iuliu Alexandru; Precup, Radu-Emil; Roman, Raul-Cristian; Petriu, Emil M. |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Inc, 01.01.2022 |
| ISSN | 0020-0255 (print); 1872-6291 (electronic) |
| DOI | 10.1016/j.ins.2021.10.070 |
| Abstract | This paper presents a novel Reinforcement Learning (RL)-based control approach that uses a combination of a Deep Q-Learning (DQL) algorithm and a metaheuristic Gravitational Search Algorithm (GSA). The GSA is employed to initialize the weights and the biases of the Neural Network (NN) involved in DQL in order to avoid the instability that is the main drawback of traditional randomly initialized NNs. The quality of a particular set of weights and biases is measured at each iteration of the GSA-based initialization using a fitness function aimed at achieving the predefined optimal control or learning objective. The data generated during the RL process are used to train an NN-based controller able to autonomously achieve the optimal reference tracking control objective. The proposed approach is compared with similar techniques that use different algorithms in the initialization step, namely the traditional random algorithm, the Grey Wolf Optimizer (GWO) algorithm, and the Particle Swarm Optimization (PSO) algorithm. The NN-based controllers obtained with each of these techniques are compared using performance indices specific to optimal control, such as settling time, rise time, peak time, overshoot, and minimum cost function value. Real-time experiments validate and test the proposed approach in the framework of the optimal reference tracking control of a nonlinear position servo system. The experimental results show the superiority of this approach over the three competing approaches. |
| Author | Zamfirache, Iuliu Alexandru; Precup, Radu-Emil; Roman, Raul-Cristian; Petriu, Emil M. |
| Affiliations | Iuliu Alexandru Zamfirache, Radu-Emil Precup, Raul-Cristian Roman: Politehnica University of Timisoara, Department of Automation and Applied Informatics, Bd. V. Parvan 2, 300223 Timisoara, Romania. Emil M. Petriu: University of Ottawa, School of Electrical Engineering and Computer Science, 800 King Edward, Ottawa, ON K1N 6N5, Canada |
| Copyright | 2021 Elsevier Inc. |
| Discipline | Engineering; Library & Information Science |
| Keywords | Reinforcement learning; Q-learning; Gravitational search algorithm; NN training; Optimal reference tracking control; Servo systems |
doi: 10.1016/j.ins.2013.05.035 – volume: 402 start-page: 50 year: 2020 ident: 10.1016/j.ins.2021.10.070_b0080 article-title: Reinforcement learning-based control for nonlinear discrete-time systems with unknown control directions and control constraints publication-title: Neurocomputing doi: 10.1016/j.neucom.2020.03.061 – ident: 10.1016/j.ins.2021.10.070_b0135 doi: 10.4018/IJSIR.2016070102 – volume: 58 start-page: 373 year: 2021 ident: 10.1016/j.ins.2021.10.070_b0045 article-title: Hybrid data-driven fuzzy active disturbance rejection control for tower crane systems publication-title: Eur. J. Control doi: 10.1016/j.ejcon.2020.08.001 – ident: 10.1016/j.ins.2021.10.070_b0100 doi: 10.1109/IRC.2019.00121 – volume: 69 start-page: 46 year: 2014 ident: 10.1016/j.ins.2021.10.070_b0145 article-title: Grey wolf optimizer publication-title: Adv. Eng. Softw. doi: 10.1016/j.advengsoft.2013.12.007 – volume: 9 start-page: 727 issue: 3 year: 2010 ident: 10.1016/j.ins.2021.10.070_b0175 article-title: BGSA: binary gravitational search algorithm publication-title: Nat. Comput. doi: 10.1007/s11047-009-9175-3 – year: 2017 ident: 10.1016/j.ins.2021.10.070_b0005 – volume: 20 start-page: 1 issue: 3 year: 2020 ident: 10.1016/j.ins.2021.10.070_b0085 article-title: Supervised-actor-critic reinforcement learning for intelligent mechanical ventilation and sedative dosing in intensive care units publication-title: BMC Med. Inf. Decis. Making – volume: 548 start-page: 233 year: 2021 ident: 10.1016/j.ins.2021.10.070_b0060 article-title: Membership-function-dependent stability analysis and local controller design for T-S fuzzy systems: a space-enveloping approach publication-title: Inf. Sci. doi: 10.1016/j.ins.2020.09.043 – volume: 43 start-page: 150 issue: 1 year: 2015 ident: 10.1016/j.ins.2021.10.070_b0155 article-title: How effective is the grey wolf optimizer in training multi-layer perceptrons publication-title: Appl. 
Intelligence doi: 10.1007/s10489-014-0645-7 – volume: 141 start-page: 1 issue: 10 year: 2019 ident: 10.1016/j.ins.2021.10.070_b0035 article-title: Multivariable D2-IBC and application to vehicle stability control publication-title: ASME J. Dyn. Syst., Meas. Control – volume: 97 start-page: 106766 issue: Part A year: 2020 ident: 10.1016/j.ins.2021.10.070_b0120 article-title: Fault tolerant tracking control for nonlinear systems with actuator failures through particle swarm optimization-based adaptive dynamic programming publication-title: Appl. Soft Comput. doi: 10.1016/j.asoc.2020.106766 – start-page: 553 year: 2012 ident: 10.1016/j.ins.2021.10.070_b0200 article-title: Brief introduction of Back Propagation (BP) neural network algorithm and its improvement – year: 2019 ident: 10.1016/j.ins.2021.10.070_b0090 – start-page: 65 year: 2012 ident: 10.1016/j.ins.2021.10.070_b0130 article-title: Integrating particle swarm optimization with reinforcement learning in noisy problems – ident: 10.1016/j.ins.2021.10.070_b0190 doi: 10.2514/6.2021-2563 – volume: 476 start-page: 159 year: 2019 ident: 10.1016/j.ins.2021.10.070_b0180 article-title: Interval type-2 fuzzy logic for dynamic parameter adaptation in a modified gravitational search algorithm publication-title: Inf. Sci. doi: 10.1016/j.ins.2018.10.025 – ident: 10.1016/j.ins.2021.10.070_b0230 – ident: 10.1016/j.ins.2021.10.070_b0110 – volume: 87 year: 2020 ident: 10.1016/j.ins.2021.10.070_b0250 article-title: Community detection in networks using bio-inspired optimization: latest developments, new results and perspectives with a selection of recent meta-heuristics publication-title: Appl. Soft Comput. doi: 10.1016/j.asoc.2019.106010 – volume: 45 year: 2020 ident: 10.1016/j.ins.2021.10.070_b0210 article-title: Reinforcement learning based optimizer for improvement of predicting tunneling-induced ground responses publication-title: Adv. Eng. Inf. 
doi: 10.1016/j.aei.2020.101097 – volume: 69 start-page: 4625 issue: 7 year: 2020 ident: 10.1016/j.ins.2021.10.070_b0235 article-title: Evolving fuzzy models for prosthetic hand myoelectric-based control publication-title: IEEE Trans. Instrum. Meas. doi: 10.1109/TIM.2020.2983531 – volume: 12 start-page: 748 issue: 6 year: 2017 ident: 10.1016/j.ins.2021.10.070_b0055 article-title: Fuzzy logic is not fuzzy: World-renowned computer scientist Lotfi A. Zadeh publication-title: Int. J. Comput. Commun. Control doi: 10.15837/ijccc.2017.6.3111 – volume: 28 start-page: 1542 issue: 8 year: 2020 ident: 10.1016/j.ins.2021.10.070_b0240 article-title: Generic evolving self-organizing neuro-fuzzy control of bio-inspired unmanned aerial vehicles publication-title: IEEE Trans. Fuzzy Syst. doi: 10.1109/TFUZZ.2019.2917808 – ident: 10.1016/j.ins.2021.10.070_b0095 |
| StartPage | 99 |
| SubjectTerms | Gravitational search algorithm NN training Optimal reference tracking control Q-learning Reinforcement learning Servo systems |
| Title | Reinforcement Learning-based control using Q-learning and gravitational search algorithm with experimental validation on a nonlinear servo system |
| URI | https://dx.doi.org/10.1016/j.ins.2021.10.070 |
| Volume | 583 |