Focus on temporal graph convolutional networks with unified attention for skeleton-based action recognition

Bibliographic Details
Published in Applied intelligence (Dordrecht, Netherlands) Vol. 52; no. 5; pp. 5608-5616
Main Authors Gao, Bing-Kun, Dong, Le, Bi, Hong-Bo, Bi, Yun-Ze
Format Journal Article
Language English
Published New York Springer US 01.03.2022
Subjects
ISSN 0924-669X
EISSN 1573-7497
DOI 10.1007/s10489-021-02723-6

Abstract Graph convolutional networks (GCNs) have received increasing attention in skeleton-based action recognition. Many existing GCN models focus on spatial information and neglect temporal information, yet an action can only be completed through change over time. Moreover, the channel, spatial, and temporal dimensions often contain redundant information. In this paper, we design a temporal graph convolutional network (FTGCN) module that gathers more temporal information and balances it appropriately for each action. To better integrate channel, spatial, and temporal information, we propose a unified channel-spatial-temporal attention model (CSTA). A basic block containing these two novelties is called FTC-GCN. Extensive experiments on two large-scale datasets, against 17 methods on NTU-RGB+D and 8 methods on Kinetics-Skeleton, show that our method achieves the best performance for skeleton-based human action recognition.
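To make the idea in the abstract concrete, below is a minimal PyTorch sketch of one spatial-temporal graph convolution block gated by a unified channel-spatial-temporal attention. This is only an illustration inferred from the abstract, not the authors' released FTGCN/CSTA code: the class names, kernel sizes, reduction ratio, and the multiplicative gating scheme are assumptions.

```python
# Minimal sketch of an ST-GCN-style block with a unified
# channel-spatial-temporal attention, loosely following the ideas
# described in the abstract (FTGCN + CSTA). NOT the authors' code;
# all layer sizes and the attention design are illustrative assumptions.
import torch
import torch.nn as nn


class CSTAttention(nn.Module):
    """Attention over channel (C), joint/spatial (V) and temporal (T) axes.
    Each branch squeezes the other two axes and produces a sigmoid gate."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, channels), nn.Sigmoid())
        self.spatial_conv = nn.Conv1d(channels, 1, kernel_size=1)              # over joints
        self.temporal_conv = nn.Conv1d(channels, 1, kernel_size=9, padding=4)  # over frames

    def forward(self, x):                      # x: (N, C, T, V)
        n, c, t, v = x.shape
        ch = self.channel_fc(x.mean(dim=(2, 3))).view(n, c, 1, 1)          # channel gate
        sp = torch.sigmoid(self.spatial_conv(x.mean(dim=2))).view(n, 1, 1, v)  # joint gate
        tp = torch.sigmoid(self.temporal_conv(x.mean(dim=3))).view(n, 1, t, 1) # frame gate
        return x * ch * sp * tp


class FTCBlock(nn.Module):
    """Spatial graph conv -> temporal conv -> unified attention, with residual."""
    def __init__(self, in_channels, out_channels, A, t_kernel=9):
        super().__init__()
        self.register_buffer("A", A)           # (V, V) normalized adjacency
        self.gcn = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.tcn = nn.Sequential(
            nn.BatchNorm2d(out_channels), nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels,
                      kernel_size=(t_kernel, 1), padding=(t_kernel // 2, 0)),
            nn.BatchNorm2d(out_channels))
        self.att = CSTAttention(out_channels)
        self.residual = (nn.Identity() if in_channels == out_channels
                         else nn.Conv2d(in_channels, out_channels, kernel_size=1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                      # x: (N, C, T, V)
        res = self.residual(x)
        x = torch.einsum("nctv,vw->nctw", self.gcn(x), self.A)  # spatial aggregation
        x = self.tcn(x)                                          # temporal convolution
        return self.relu(self.att(x) + res)


if __name__ == "__main__":
    V = 25                                     # e.g. NTU-RGB+D skeletons have 25 joints
    A = torch.eye(V)                           # placeholder adjacency matrix
    block = FTCBlock(3, 64, A)
    out = block(torch.randn(2, 3, 100, V))     # (batch, xyz, frames, joints)
    print(out.shape)                           # torch.Size([2, 64, 100, 25])
```

The gating here applies a squeeze-and-excitation-style weight independently along each axis and multiplies the three; the paper's CSTA unifies channel, spatial, and temporal attention, so a jointly computed gate is an equally plausible reading of the abstract.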
Author Dong, Le
Bi, Hong-Bo
Gao, Bing-Kun
Bi, Yun-Ze
Author_xml – sequence: 1
  givenname: Bing-Kun
  surname: Gao
  fullname: Gao, Bing-Kun
  organization: NorthEast Petroleum University
– sequence: 2
  givenname: Le
  surname: Dong
  fullname: Dong, Le
  organization: NorthEast Petroleum University
– sequence: 3
  givenname: Hong-Bo
  orcidid: 0000-0003-2442-330X
  surname: Bi
  fullname: Bi, Hong-Bo
  email: bhbdq@126.com
  organization: NorthEast Petroleum University
– sequence: 4
  givenname: Yun-Ze
  surname: Bi
  fullname: Bi, Yun-Ze
  organization: NorthEast Petroleum University
CitedBy_id crossref_primary_10_1007_s13735_024_00341_9
crossref_primary_10_1109_ACCESS_2024_3405182
crossref_primary_10_1007_s10489_022_04302_9
crossref_primary_10_1007_s00138_023_01386_2
crossref_primary_10_1007_s13735_023_00301_9
ContentType Journal Article
Copyright The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
Copyright_xml – notice: The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
DOI 10.1007/s10489-021-02723-6
Discipline Computer Science
EISSN 1573-7497
EndPage 5616
ExternalDocumentID 10_1007_s10489_021_02723_6
ISSN 0924-669X
IsPeerReviewed true
IsScholarly true
Issue 5
Keywords Unified attention model
Skeleton-based action recognition
Graph convolutional networks
Temporal information
Language English
ORCID 0000-0003-2442-330X
PageCount 9
PublicationCentury 2000
PublicationDate 20220300
2022-03-00
PublicationDateYYYYMMDD 2022-03-01
PublicationDate_xml – month: 3
  year: 2022
  text: 20220300
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationSubtitle The International Journal of Research on Intelligent Systems for Real Life Complex Problems
PublicationTitle Applied intelligence (Dordrecht, Netherlands)
PublicationTitleAbbrev Appl Intell
PublicationYear 2022
Publisher Springer US
Publisher_xml – name: Springer US
StartPage 5608
SubjectTerms Artificial Intelligence
Computer Science
Machines
Manufacturing
Mechanical Engineering
Processes
Title Focus on temporal graph convolutional networks with unified attention for skeleton-based action recognition
URI https://link.springer.com/article/10.1007/s10489-021-02723-6
Volume 52