Fusion inception and transformer network for continuous estimation of finger kinematics from surface electromyography


Bibliographic Details
Published in Frontiers in Neurorobotics Vol. 18; p. 1305605
Main Authors Lin, Chuang; Zhang, Xiaobing
Format Journal Article
Language English
Published Switzerland Frontiers Media S.A 03.05.2024
Subjects
Online Access Get full text
ISSN 1662-5218
DOI 10.3389/fnbot.2024.1305605


Abstract Decoding surface electromyography (sEMG) to recognize human movement intentions enables us to achieve stable, natural and consistent control in the field of human computer interaction (HCI). In this paper, we present a novel deep learning (DL) model, named fusion inception and transformer network (FIT), which effectively models both local and global information on sequence data by fully leveraging the capabilities of Inception and Transformer networks. In the publicly available Ninapro dataset, we selected surface EMG signals from six typical hand grasping maneuvers in 10 subjects for predicting the values of the 10 most important joint angles in the hand. Our model’s performance, assessed through Pearson’s correlation coefficient (PCC), root mean square error (RMSE), and R-squared (R 2 ) metrics, was compared with temporal convolutional network (TCN), long short-term memory network (LSTM), and bidirectional encoder representation from transformers model (BERT). Additionally, we also calculate the training time and the inference time of the models. The results show that FIT is the most performant, with excellent estimation accuracy and low computational cost. Our model contributes to the development of HCI technology and has significant practical value.
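The abstract reports three evaluation metrics: Pearson's correlation coefficient (PCC), root mean square error (RMSE), and R-squared (R2). As a minimal NumPy sketch (not the paper's own code), assuming `y_true` and `y_pred` are 1-D arrays of true and estimated joint angles, the metrics can be computed as:

```python
import numpy as np

def pcc(y_true, y_pred):
    # Pearson's correlation coefficient between true and estimated angles
    return np.corrcoef(y_true, y_pred)[0, 1]

def rmse(y_true, y_pred):
    # Root mean square error of the estimates
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    # Coefficient of determination (R^2): 1 minus the ratio of
    # residual sum of squares to total sum of squares
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

A higher PCC and R2 and a lower RMSE indicate a closer match between estimated and measured kinematics, which is the basis on which the paper compares FIT against TCN, LSTM, and BERT.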
Author Zhang, Xiaobing
Lin, Chuang
AuthorAffiliation School of Information Science and Technology, Dalian Maritime University, Dalian, China
BackLink https://www.ncbi.nlm.nih.gov/pubmed/38765870 (View this record in MEDLINE/PubMed)
CitedBy_id crossref_primary_10_1016_j_inffus_2024_102697
crossref_primary_10_3389_fnbot_2024_1499703
ContentType Journal Article
Copyright Copyright © 2024 Lin and Zhang.
Copyright © 2024 Lin and Zhang. 2024 Lin and Zhang
Copyright_xml – notice: Copyright © 2024 Lin and Zhang.
– notice: Copyright © 2024 Lin and Zhang. 2024 Lin and Zhang
DBID AAYXX
CITATION
NPM
7X8
5PM
DOA
DOI 10.3389/fnbot.2024.1305605
DatabaseName CrossRef
PubMed
MEDLINE - Academic
PubMed Central (Full Participant titles)
DOAJ Open Access Full Text
DatabaseTitle CrossRef
PubMed
MEDLINE - Academic
DatabaseTitleList CrossRef
PubMed

MEDLINE - Academic

Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: NPM
  name: PubMed
  url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 1662-5218
ExternalDocumentID oai_doaj_org_article_2500e972d1444069a5e5c141cd9bcead
PMC11100415
38765870
10_3389_fnbot_2024_1305605
Genre Journal Article
ISSN 1662-5218
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords deep learning
human-computer interaction
continuous estimation
finger kinematics
surface electromyography
Language English
License Copyright © 2024 Lin and Zhang.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
LinkModel DirectLink
Edited by: Chenyun Dai, Shanghai Jiao Tong University, China
Reviewed by: Gan Huang, Shenzhen University, China; Maarten Ottenhoff, Maastricht University, Netherlands; Li Li, Wuhan University, China
OpenAccessLink http://journals.scholarsportal.info/openUrl.xqy?doi=10.3389/fnbot.2024.1305605
PMID 38765870
PQID 3057074864
PQPubID 23479
PublicationCentury 2000
PublicationDate 2024-05-03
PublicationDateYYYYMMDD 2024-05-03
PublicationDate_xml – month: 05
  year: 2024
  text: 2024-05-03
  day: 03
PublicationDecade 2020
PublicationPlace Switzerland
PublicationPlace_xml – name: Switzerland
PublicationTitle Frontiers in neurorobotics
PublicationTitleAlternate Front Neurorobot
PublicationYear 2024
Publisher Frontiers Media S.A
Publisher_xml – name: Frontiers Media S.A
References Li (ref22) 2021; 15
Cipriani (ref11) 2011; 8
LeCun (ref20) 2015; 521
Ortiz-Catalan (ref26) 2015
Elman (ref14) 1990; 14
Bi (ref6) 2019; 51
Guo (ref15) 2021; 18
Hochreiter (ref16) 1997; 9
Côté-Allard (ref12) 2017
Kim (ref19) 2016
Atzori (ref3) 2015; 23
Artemiadis (ref2) 2012; 1
Xiong (ref32) 2021; 8
Chen (ref9) 2021; 11
Devlin (ref13) 2018
Chen (ref10) 2023; 85
Liu (ref24) 2019
Meekins (ref25) 2008; 38
Kapandjl (ref17) 1971; 50
Bai (ref4) 2018
Ketkar (ref18) 2021
Tsinganos (ref29) 2019
Simão (ref27) 2019; 128
Szegedy (ref28) 2015
Arabadzhiev (ref1) 2010; 20
Chadwell (ref8) 2016; 10
Cambria (ref7) 2014; 9
Vaswani (ref30) 2017
Bai (ref5) 2021
LeCun (ref21) 1989; 1
Lin (ref23) 2021; 29
Vigotsky (ref31) 2018; 8
References_xml – volume: 8
  start-page: 29
  year: 2011
  ident: ref11
  article-title: The smart hand transradial prosthesis
  publication-title: J. Neuro Eng. Rehab.
  doi: 10.1186/1743-0003-8-29
– start-page: 6000
  volume-title: “Attention is all you need,” in 31st International Conference on Neural Information Processing Systems (NIPS)
  year: 2017
  ident: ref30
– volume: 18
  start-page: 026027
  year: 2021
  ident: ref15
  article-title: Long exposure convolutional memory network for accurate estimation of finger kinematics from surface electromyographic signals
  publication-title: J. Neural Eng.
  doi: 10.1088/1741-2552/abd461
– start-page: 27
  volume-title: "Introduction to PyTorch," in Deep learning with Python: Learn best practices of deep learning models with PyTorch
  year: 2021
  ident: ref18
  doi: 10.1007/978-1-4842-5364-9_2
– volume: 9
  start-page: 1735
  year: 1997
  ident: ref16
  article-title: Long short-term memory
  publication-title: Neural Comput.
  doi: 10.1162/neco.1997.9.8.1735
– volume: 50
  start-page: 96
  year: 1971
  ident: ref17
  article-title: The physiology of the joints, volume I, upper limb
  publication-title: Am. J. Phys. Med. Rehabil.
– volume: 521
  start-page: 436
  year: 2015
  ident: ref20
  article-title: Deep learning
  publication-title: Nature
  doi: 10.1038/nature14539
– volume: 9
  start-page: 48
  year: 2014
  ident: ref7
  article-title: Jumping NLP curves: a review of natural language processing research [review article]
  publication-title: IEEE Comput. Intell. Mag.
  doi: 10.1109/mci.2014.2307227
– volume: 14
  start-page: 179
  year: 1990
  ident: ref14
  article-title: Finding structure in time
  publication-title: Cogn. Sci.
  doi: 10.1016/0364-0213(90)90002-E
– volume: 51
  start-page: 113
  year: 2019
  ident: ref6
  article-title: A review on EMG-based motor intention prediction of continuous human upper limb motion for human-robot collaboration
  publication-title: Biomed. Signal Process. Control
  doi: 10.1016/j.bspc.2019.02.011
– volume: 10
  start-page: 7
  year: 2016
  ident: ref8
  article-title: The reality of myoelectric prostheses: understanding what makes these devices difficult for some users to control
  publication-title: Front. Neurorobot.
  doi: 10.3389/fnbot.2016.00007
– start-page: 1063
  volume-title: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  year: 2015
  ident: ref28
  article-title: Rethinking the inception architecture for computer vision
– start-page: 83
  volume-title: In: 2016 13th international conference on ubiquitous robots and ambient intelligence (URAI)
  year: 2016
  ident: ref19
  article-title: Development of a wearable HCI controller through sEMG & IMU sensor fusion
– volume: 20
  start-page: 223
  year: 2010
  ident: ref1
  article-title: Interpretation of EMG integral or RMS and estimates of “neuromuscular efficiency” can be misleading in fatiguing contraction
  publication-title: J. Electromyogr. Kinesiol.
  doi: 10.1016/j.jelekin.2009.01.008
– start-page: 1663
  volume-title: 2017 IEEE international conference on systems, man, and cybernetics (SMC)
  year: 2017
  ident: ref12
  article-title: Transfer learning for sEMG hand gestures recognition using convolutional neural networks
  doi: 10.1109/SMC.2017.8122854
– start-page: 1140
  volume-title: 2015 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC)
  year: 2015
  ident: ref26
  article-title: Offline accuracy: a potentially misleading metric in myoelectric pattern recognition for prosthetic control
  doi: 10.1109/EMBC.2015.7318567
– start-page: 1810.04805
  year: 2018
  ident: ref13
– volume: 1
  start-page: 1
  year: 2012
  ident: ref2
  article-title: EMG-based robot control interfaces: past, present and future
  publication-title: Adv. Robot. Automat.
  doi: 10.4172/2168-9695.1000e107
– volume: 85
  start-page: 105030
  year: 2023
  ident: ref10
  article-title: Continuous motion finger joint angle estimation utilizing hybrid sEMG-FMG modality driven transformer-based deep learning model
  publication-title: Biomed. Signal Process. Control
  doi: 10.1016/j.bspc.2023.105030
– start-page: 1169
  volume-title: ICASSP 2019–2019 IEEE international conference on acoustics, speech and signal processing (ICASSP)
  year: 2019
  ident: ref29
  article-title: Improved gesture recognition based on sEMG signals and TCN
  doi: 10.1109/ICASSP.2019.8683239
– start-page: 111
  volume-title: 2021 IEEE international conference on intelligence and safety for robotics (ISR)
  year: 2021
  ident: ref5
  article-title: Multi-Channel sEMG signal gesture recognition based on improved CNN-LSTM hybrid models
  doi: 10.1109/ISR50024.2021.9419532
– volume: 1
  start-page: 541
  year: 1989
  ident: ref21
  article-title: Backpropagation applied to handwritten zip code recognition
  publication-title: Neural Comput.
  doi: 10.1162/neco.1989.1.4.541
– volume: 29
  start-page: 3440
  year: 2021
  ident: ref23
  article-title: Speech enhancement using multi-stage self-attentive temporal convolutional networks
  publication-title: IEEE/ACM Transact. Audio Speech Lang. Proces.
  doi: 10.1109/TASLP.2021.3125143
– volume: 23
  start-page: 73
  year: 2015
  ident: ref3
  article-title: Characterization of a benchmark database for myoelectric movement classification
  publication-title: IEEE Trans. Neural Syst. Rehabil. Eng.
  doi: 10.1109/TNSRE.2014.2328495
– volume: 128
  start-page: 45
  year: 2019
  ident: ref27
  article-title: EMG-based online classification of gestures with recurrent neural networks
  publication-title: Pattern Recogn. Lett.
  doi: 10.1016/j.patrec.2019.07.021
– volume: 8
  start-page: 512
  year: 2021
  ident: ref32
  article-title: Deep learning for EMG-based human-machine interaction: a review
  publication-title: IEEE/CAA J. Automatica Sinica
  doi: 10.1109/JAS.2021.1003865
– start-page: 1803.01271
  volume-title: An empirical evaluation of generic convolutional and recurrent networks for sequence modeling
  year: 2018
  ident: ref4
– volume: 38
  start-page: 1219
  year: 2008
  ident: ref25
  article-title: American Association of Neuromuscular & electrodiagnostic medicine evidenced-based review: use of surface electromyography in the diagnosis and study of neuromuscular disorders
  publication-title: Muscle Nerve
  doi: 10.1002/mus.21055
– start-page: 140
  volume-title: 2019 IEEE 15th international conference on automation science and engineering (CASE)
  year: 2019
  ident: ref24
  article-title: sEMG-based continuous estimation of knee joint angle using deep learning with convolutional neural network
  doi: 10.1109/COASE.2019.8843168
– volume: 11
  start-page: 4678
  year: 2021
  ident: ref9
  article-title: sEMG-based continuous estimation of finger kinematics via large-scale temporal convolutional network
  publication-title: Appl. Sci.
  doi: 10.3390/app11104678
– volume: 15
  start-page: 621885
  year: 2021
  ident: ref22
  article-title: Gesture recognition using surface electromyography and deep learning for prostheses hand: state-of-the-art, challenges, and future
  publication-title: Front. Neurosci.
  doi: 10.3389/fnins.2021.621885
– volume: 8
  start-page: 985
  year: 2018
  ident: ref31
  article-title: Interpreting signal amplitudes in surface electromyography studies in sport and rehabilitation sciences
  publication-title: Front. Physiol.
  doi: 10.3389/fphys.2017.00985
SourceID doaj
pubmedcentral
proquest
pubmed
crossref
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Enrichment Source
StartPage 1305605
SubjectTerms continuous estimation
deep learning
finger kinematics
human-computer interaction
Neuroscience
surface electromyography
Title Fusion inception and transformer network for continuous estimation of finger kinematics from surface electromyography
URI https://www.ncbi.nlm.nih.gov/pubmed/38765870
https://www.proquest.com/docview/3057074864
https://pubmed.ncbi.nlm.nih.gov/PMC11100415
https://doaj.org/article/2500e972d1444069a5e5c141cd9bcead
Volume 18
linkProvider Scholars Portal