Distributed Neural Networks Training for Robotic Manipulation With Consensus Algorithm
In this article, we propose an algorithm that combines actor-critic-based off-policy method with consensus-based distributed training to deal with multiagent deep reinforcement learning problems. Specifically, convergence analysis of a consensus algorithm for a type of nonlinear system with a Lyapunov method is developed, and we use this result to analyze the convergence properties of the actor training parameters and the critic training parameters in our algorithm.
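The summary above describes the paper's key ingredient: a consensus step that pulls each agent's actor and critic parameters toward those of its neighbours during distributed training, so that all agents converge to the same model. The sketch below illustrates only the generic linear consensus-averaging idea in Python/NumPy; the function name, ring topology, and step size are illustrative assumptions made for this record, not the paper's actual update rule or convergence conditions.

```python
import numpy as np

def consensus_step(params, adjacency, epsilon):
    """One synchronous consensus update over a communication graph.

    Each agent i nudges its parameter vector toward its neighbours':
        theta_i <- theta_i + epsilon * sum_j a_ij * (theta_j - theta_i)

    params    : (n_agents, n_params) array of per-agent parameter vectors
    adjacency : (n_agents, n_agents) nonnegative weights of the graph
    epsilon   : step size; for a connected, symmetric graph and a small
                enough epsilon, all rows converge to a common vector.
    """
    diffs = params[None, :, :] - params[:, None, :]       # theta_j - theta_i
    update = (adjacency[:, :, None] * diffs).sum(axis=1)  # weighted sum over j
    return params + epsilon * update


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_agents, n_params = 4, 6

    # Ring communication graph (illustrative topology, not from the paper).
    adjacency = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        adjacency[i, (i + 1) % n_agents] = 1.0
        adjacency[i, (i - 1) % n_agents] = 1.0

    # Stand-in for per-agent actor/critic weights.
    params = rng.normal(size=(n_agents, n_params))
    for _ in range(200):
        params = consensus_step(params, adjacency, epsilon=0.2)

    # Disagreement shrinks toward zero: all agents approach the same model.
    print("max disagreement:", np.abs(params - params.mean(axis=0)).max())
```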
| Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, No. 2, pp. 2732-2746 |
|---|---|
| Main Authors | Liu, Wenxing; Niu, Hanlin; Jang, Inmo; Herrmann, Guido; Carrasco, Joaquin |
| Format | Journal Article |
| Language | English |
| Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.02.2024 |
| Subjects | Algorithms; Consensus; Convergence; Deep reinforcement learning; Lyapunov methods; Machine learning; Manipulators; Multiagent systems; Neural networks; Nonlinear systems; Reinforcement learning; Robot arms; Robot kinematics; Training |
| Online Access | Get full text |
| ISSN | 2162-237X (print), 2162-2388 (electronic) |
| DOI | 10.1109/TNNLS.2022.3191021 |
| Abstract | In this article, we propose an algorithm that combines actor-critic-based off-policy method with consensus-based distributed training to deal with multiagent deep reinforcement learning problems. Specifically, convergence analysis of a consensus algorithm for a type of nonlinear system with a Lyapunov method is developed, and we use this result to analyze the convergence properties of the actor training parameters and the critic training parameters in our algorithm. Through the convergence analysis, it can be verified that all agents will converge to the same optimal model as the training time goes to infinity. To validate the implementation of our algorithm, a multiagent training framework is proposed to train each Universal Robot 5 (UR5) robot arm to reach the random target position. Finally, experiments are provided to demonstrate the effectiveness and feasibility of the proposed algorithm. |
|---|---|
| Author | Liu, Wenxing; Niu, Hanlin; Jang, Inmo; Herrmann, Guido; Carrasco, Joaquin |
| Author Details | Wenxing Liu (ORCID 0000-0002-3195-3862, wenxing.liu@manchester.ac.uk); Hanlin Niu (ORCID 0000-0003-0457-0871, hanlin.niu@manchester.ac.uk); Inmo Jang (ORCID 0000-0002-7492-3938, inmo.jang@manchester.ac.uk); Guido Herrmann (guido.herrmann@manchester.ac.uk); Joaquin Carrasco (ORCID 0000-0002-7499-6408, joaquin.carrasco@manchester.ac.uk). All authors are with the Department of Electrical and Electronic Engineering, The University of Manchester, Manchester, U.K. |
| CODEN | ITNNAL |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| DOI | 10.1109/TNNLS.2022.3191021 |
| Discipline | Computer Science |
| EISSN | 2162-2388 |
| EndPage | 2746 |
| Genre | orig-research Journal Article |
| GrantInformation | Robotics and Artificial Intelligence for Nuclear (RAIN) Hub, grant EP/R026084/1; Engineering and Physical Sciences Research Council (EPSRC), grant EP/S03286X/1 (funder ID 10.13039/501100000266) |
| ISSN | 2162-237X |
| IsPeerReviewed | false |
| IsScholarly | true |
| Issue | 2 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0002-7492-3938 0000-0003-0457-0871 0000-0002-7499-6408 0000-0002-3195-3862 |
| PMID | 35853061 |
| PageCount | 15 |
| PublicationDate | 2024-02-01 |
| PublicationPlace | United States |
| PublicationTitle | IEEE Transactions on Neural Networks and Learning Systems |
| PublicationTitleAbbrev | TNNLS |
| PublicationTitleAlternate | IEEE Trans Neural Netw Learn Syst |
| PublicationYear | 2024 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 2732 |
| SubjectTerms | Algorithms Consensus Convergence deep reinforcement learning Lyapunov methods Machine learning manipulator Manipulators Multiagent systems Neural networks Nonlinear systems Parameters Privacy Reinforcement learning Robot arms Robot kinematics Task analysis Training |
| Title | Distributed Neural Networks Training for Robotic Manipulation With Consensus Algorithm |
| URI | https://ieeexplore.ieee.org/document/9833460 https://www.ncbi.nlm.nih.gov/pubmed/35853061 https://www.proquest.com/docview/2923134722 https://www.proquest.com/docview/2691789555 |
| Volume | 35 |