Human Action Recognition Research Based on Fusion TS-CNN and LSTM Networks

Bibliographic Details
Published in Arabian Journal for Science and Engineering (2011) Vol. 48; No. 2; pp. 2331–2345
Main Authors Zan, Hui, Zhao, Gang
Format Journal Article
Language English
Published Berlin/Heidelberg Springer Berlin Heidelberg 01.02.2023
Springer Nature B.V
Subjects
ISSN2193-567X
1319-8025
2191-4281
DOI10.1007/s13369-022-07236-z


Abstract Human action recognition (HAR) technology is currently of significant interest. Traditional HAR methods generally depend on the temporal and spatial information of the video stream; they require massive training datasets and produce long response times, failing to simultaneously meet the technical requirements of real-time interaction: high accuracy, low delay, and low computational cost. For instance, the duration of a gymnastic action can be as short as 0.2 s, from action capture to recognition and then to the visualization of a three-dimensional character model. Only when the response time of the application system is short enough can it guide synchronous training and accurate evaluation. To reduce the dependence on the amount of video data and meet the HAR technical requirements, this paper proposes a three-stream CNN–LSTM (TS-CNN-LSTM) framework combining convolutional and long short-term memory networks. Firstly, color, depth, and skeleton data of the human body collected by a Microsoft Kinect are used as input to reduce the sample size. Secondly, heterogeneous convolutional networks are established to reduce computing costs and shorten the response time. The experimental results demonstrate the effectiveness of the proposed model on NTU RGB+D, reaching a best accuracy of 87.28% in the cross-subject mode. Compared with state-of-the-art methods, our method uses 75% of the training sample size, while its time and space complexity occupy only 67.5% and 73.98%, respectively. The response time for recognizing one set of actions is improved by 0.90–1.61 s, which is especially valuable for timely action feedback. The proposed method provides an effective solution for real-time interactive applications that require timely human action recognition results and responses.
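The abstract describes a three-stream design in which color, depth, and skeleton streams are each classified and their outputs fused. A minimal pure-Python sketch of such late score fusion is given below; the function names, example scores, and equal stream weights are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical late-fusion sketch for a three-stream recognizer:
# each stream (color, depth, skeleton) yields per-class scores,
# and the fused prediction is a weighted average of those scores.

def fuse_streams(stream_scores, weights=None):
    """Weighted average of per-class scores across streams (illustrative only)."""
    n_streams = len(stream_scores)
    n_classes = len(stream_scores[0])
    if weights is None:
        # Assume equal stream weights when none are given.
        weights = [1.0 / n_streams] * n_streams
    fused = [0.0] * n_classes
    for w, scores in zip(weights, stream_scores):
        for c, s in enumerate(scores):
            fused[c] += w * s
    return fused

def predict(stream_scores, weights=None):
    """Return the index of the highest fused class score."""
    fused = fuse_streams(stream_scores, weights)
    return max(range(len(fused)), key=fused.__getitem__)

# Example: scores for 3 action classes from the color, depth, and skeleton streams.
color    = [0.2, 0.7, 0.1]
depth    = [0.3, 0.4, 0.3]
skeleton = [0.1, 0.8, 0.1]
print(predict([color, depth, skeleton]))  # prints 1: class 1 dominates across streams
```

The fusion step here is deliberately simple; the paper's framework additionally passes per-stream convolutional features through LSTM layers to model temporal dependencies before any scores are combined.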
Author Zhao, Gang
Zan, Hui
Author_xml – sequence: 1
  givenname: Hui
  surname: Zan
  fullname: Zan, Hui
  organization: Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Faculty of Artificial Intelligence in Education, Central China Normal University
– sequence: 2
  givenname: Gang
  surname: Zhao
  fullname: Zhao, Gang
  email: zhaogang@ccnu.edu.cn
  organization: Faculty of Artificial Intelligence in Education, Central China Normal University
CitedBy_id crossref_primary_10_1007_s10462_024_10934_9
crossref_primary_10_3390_s24082595
crossref_primary_10_3390_app14146335
crossref_primary_10_1007_s42452_024_05774_9
ContentType Journal Article
Copyright King Fahd University of Petroleum & Minerals 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Copyright_xml – notice: King Fahd University of Petroleum & Minerals 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
DBID AAYXX
CITATION
DOI 10.1007/s13369-022-07236-z
DatabaseName CrossRef
DatabaseTitle CrossRef
DatabaseTitleList

DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 2191-4281
EndPage 2345
ExternalDocumentID 10_1007_s13369_022_07236_z
GrantInformation_xml – fundername: Zhejiang Education Science Planning Project Zhejiang Province, China.
  grantid: 2021SCG309
– fundername: Research on Automatic Segmentation and Recognition of Teaching Scene with the Characteristics of Teaching Behavior of National Natural Science Foundation of China
  grantid: 61977034
– fundername: open fund of Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province
  grantid: NO.jykf20057
ISSN 2193-567X
1319-8025
IsPeerReviewed true
IsScholarly true
Issue 2
Keywords Human action recognition
TS-LSTM
Multistream network
CNN-LSTM
Language English
LinkModel OpenURL
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
PQID 2774560092
PQPubID 2044268
PageCount 15
ParticipantIDs proquest_journals_2774560092
crossref_primary_10_1007_s13369_022_07236_z
crossref_citationtrail_10_1007_s13369_022_07236_z
springer_journals_10_1007_s13369_022_07236_z
PublicationCentury 2000
PublicationDate 2023-02-01
PublicationDateYYYYMMDD 2023-02-01
PublicationDate_xml – month: 02
  year: 2023
  text: 2023-02-01
  day: 01
PublicationDecade 2020
PublicationPlace Berlin/Heidelberg
PublicationPlace_xml – name: Berlin/Heidelberg
– name: Heidelberg
PublicationTitle Arabian journal for science and engineering (2011)
PublicationTitleAbbrev Arab J Sci Eng
PublicationYear 2023
Publisher Springer Berlin Heidelberg
Springer Nature B.V
Publisher_xml – name: Springer Berlin Heidelberg
– name: Springer Nature B.V
References Shahroudy, Liu, Ng (CR50) 2016; 1
Gao, Zhang, Teng (CR37) 2021
Luvizon, Picard, Tabia (CR17) 2020; 8
Hadfield, Lebeda, Bowden (CR42) 2017; 121
CR38
Lee, Ahn (CR2) 2020; 20
Zhu, Wu, Cui, Wang, Hang, Hua, Snoussi (CR57) 2020; 414
Zhu, Zhang, Chen (CR7) 2020; 1
CR36
Chen, Du, He (CR40) 2021; 18
CR32
Ke, Bennamoun, An (CR22) 2018; 27
Wen, Gao, Fu (CR26) 2019; 33
Mou, Zhou, Zhao (CR35) 2021; 173
Zhu, Qianyu, Cui (CR39) 2020; 414
Yan, Chong-Chong, Han (CR33) 2019; 40
Pham, Salmane, Khoudour (CR8) 2020; 20
Zhang, Zhang, Zhong (CR5) 2020
CR6
Wang, Yu, Lai (CR31) 2019; 28
Wang, Zhao, Liu (CR61) 2018; 6
Simonyan, Zisserman (CR58) 2014; 1
CR49
Kim, Kim, Hernandez Montoya (CR41) 2019; 14
CR47
Xue-Chao (CR24) 2019; 43
Ren, Zhang, Qiao (CR63) 2020
CR43
Chen, Kong, Sun (CR19) 2020; 20
Kim, Park, Park (CR10) 2020; 20
Johansson (CR3) 1975; 232
Donahue, Hendricks, Rohrbach (CR29) 2017; 39
Dong, Fang, Xudong (CR4) 2020; 33
Dhiman, Vishwakarma (CR9) 2020; 29
Wang, Song, Li (CR30) 2020; 20
Ma, Wang, Mao (CR15) 2019; 11
Min, Lan (CR28) 2020; 26
Panareda, Iqbal, Gall (CR14) 2020; 42
Aggarwal, Ryoo (CR1) 2011; 43
Meng, Liu, Liang (CR20) 2019; 28
Ali, Shah (CR11) 2010; 32
CR59
Chan, Tian, Wu (CR45) 2020; 20
CR12
CR56
CR55
CR54
CR53
Yasin, Hussain, Weber (CR18) 2020; 20
Caetano, Bremond, Schwartz (CR25) 2019; 1
Feichtenhofer, Pinz, Zisserman (CR48) 2016; 1
Xi-Ting, Sheng, Yao (CR13) 2020; 41
Penghua, Min, Hua (CR16) 2020; 43
Liu, Shahroudy, Xu, Wang (CR51) 2016; 1
Nie, Wang, Wang (CR46) 2019; 28
Li, Wang, Wang, Hou, Li (CR52) 2017; 1
Liu, Shahroudy, Xu (CR27) 2018; 40
Sun, Guo, Li (CR21) 2019; 1187
Li, Hou, Wang, Li (CR60) 2017; 24
Kim, Kim, Kwak (CR23) 2017; 17
Yan, Yu, Han (CR34) 2019; 40
Liu, Shahroudy, Perez, Wang, Duan, Kot (CR44) 2019; 42
CR62
HH Pham (7236_CR8) 2020; 20
C Dhiman (7236_CR9) 2020; 29
7236_CR62
J Liu (7236_CR27) 2018; 40
Y Zhu (7236_CR7) 2020; 1
A Zhu (7236_CR57) 2020; 414
S Hadfield (7236_CR42) 2017; 121
Z Yan (7236_CR34) 2019; 40
C Feichtenhofer (7236_CR48) 2016; 1
J Wang (7236_CR31) 2019; 28
Z Sun (7236_CR21) 2019; 1187
J Liu (7236_CR51) 2016; 1
7236_CR32
S Xi-Ting (7236_CR13) 2020; 41
L Mou (7236_CR35) 2021; 173
J Chen (7236_CR19) 2020; 20
F Meng (7236_CR20) 2019; 28
YH Wen (7236_CR26) 2019; 33
W Gao (7236_CR37) 2021
B Xue-Chao (7236_CR24) 2019; 43
BP Panareda (7236_CR14) 2020; 42
S Min (7236_CR28) 2020; 26
J Lee (7236_CR2) 2020; 20
Q Ke (7236_CR22) 2018; 27
C Caetano (7236_CR25) 2019; 1
A Zhu (7236_CR39) 2020; 414
Z Yan (7236_CR33) 2019; 40
7236_CR38
C Li (7236_CR52) 2017; 1
7236_CR36
D Luvizon (7236_CR17) 2020; 8
J Donahue (7236_CR29) 2017; 39
7236_CR43
Q Nie (7236_CR46) 2019; 28
A Shahroudy (7236_CR50) 2016; 1
L Wang (7236_CR61) 2018; 6
H Yasin (7236_CR18) 2020; 20
HB Zhang (7236_CR5) 2020
H Wang (7236_CR30) 2020; 20
J Liu (7236_CR44) 2019; 42
H Kim (7236_CR10) 2020; 20
7236_CR49
C Ma (7236_CR15) 2019; 11
7236_CR47
7236_CR6
GE Penghua (7236_CR16) 2020; 43
7236_CR53
7236_CR54
Z Ren (7236_CR63) 2020
W Chan (7236_CR45) 2020; 20
JK Aggarwal (7236_CR1) 2011; 43
C Li (7236_CR60) 2017; 24
G Johansson (7236_CR3) 1975; 232
C Chen (7236_CR40) 2021; 18
K Simonyan (7236_CR58) 2014; 1
T Kim (7236_CR41) 2019; 14
7236_CR59
N Dong (7236_CR4) 2020; 33
D Kim (7236_CR23) 2017; 17
S Ali (7236_CR11) 2010; 32
7236_CR55
7236_CR12
7236_CR56
References_xml – volume: 1
  start-page: 68
  year: 2020
  end-page: 70
  ident: CR7
  article-title: An intelligent system based on human action control
  publication-title: China Sci. Technol. Inf.
  doi: 10.3969/j.issn.1001-8972.2020.01.023
– volume: 414
  start-page: 90
  issue: 5
  year: 2020
  end-page: 100
  ident: CR39
  article-title: Exploring a rich spatial-temporal dependent relational model for Skeleton-based action recognition by bidirectional LSTM-CNN
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2020.07.068
– volume: 43
  start-page: 137
  issue: 4
  year: 2020
  end-page: 141
  ident: CR16
  article-title: Human action recognition based on two-stream independently recurrent neural network
  publication-title: Mod. Electron. Tech.
  doi: 10.16652/j.issn.1004-373x.2020.04.035(InChinese)
– ident: CR49
– volume: 414
  start-page: 90
  year: 2020
  end-page: 100
  ident: CR57
  article-title: Exploring a rich spatial-temporal dependent relational model for skeleton-based action recognition by bidirectional LSTM-CNN
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2020.07.068
– volume: 232
  start-page: 76
  issue: 6
  year: 1975
  end-page: 88
  ident: CR3
  article-title: Visual motion perception
  publication-title: Sci. Am.
  doi: 10.1038/scientificamerican0675-76
– ident: CR12
– volume: 40
  start-page: 2620
  issue: 009
  year: 2019
  end-page: 2624
  ident: CR34
  article-title: Short term traffic flow prediction method based on CNN+LSTM
  publication-title: Comput. Eng. Des.
– volume: 1
  start-page: 568
  year: 2014
  end-page: 576
  ident: CR58
  article-title: ‘Two-stream convolutional networks for action recognition in videos’, Advances in Neural Information Processing Systems (NIPS)
  publication-title: Montréal, Canada
  doi: 10.1002/14651858.CD001941.pub3
– volume: 6
  start-page: 50788
  year: 2018
  end-page: 50800
  ident: CR61
  article-title: Skeleton feature fusion based on multistream lstm for action recognition
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2018.2869751
– volume: 39
  start-page: 677
  issue: 4
  year: 2017
  end-page: 691
  ident: CR29
  article-title: Long-term recurrent convolutional networks for visual recognition and description
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2016.2599174
– ident: CR54
– volume: 11
  start-page: 47
  year: 2019
  end-page: 50
  ident: CR15
  article-title: Action recognition based on spatiotemporal dual flow fusion network and am softmax
  publication-title: Netw. Secur. Technol. Appl.
  doi: 10.3969/j.issn.1009-6833.2019.11.027
– volume: 1
  start-page: 816
  year: 2016
  end-page: 833
  ident: CR51
  article-title: Spatio-temporal lstm with trust gates for 3d human action recognition
  publication-title: European Conference on Computer Vision (ECCV)
  doi: 10.1007/978-3-319-46487-9_50
– volume: 8
  start-page: 27522764
  issue: 43
  year: 2020
  ident: CR17
  article-title: Multi-task deep learning for real-time 3D human pose estimation and action recognition
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2020.2976014
– volume: 43
  start-page: 1
  issue: 3
  year: 2011
  end-page: 43
  ident: CR1
  article-title: Human activity analysis: A review
  publication-title: ACM Comput. Surv.
  doi: 10.1145/1922649.1922653
– volume: 26
  start-page: 73
  issue: 1
  year: 2020
  end-page: 76
  ident: CR28
  article-title: Human movements recognition based on LSTM network model and front action view
  publication-title: J. Anqing Normal Univ. (Nat. Sci. Ed.)
  doi: 10.13757/j.cnki.cn34-1328/n.2020.01.013
– volume: 27
  start-page: 2842
  issue: 6
  year: 2018
  end-page: 2855
  ident: CR22
  article-title: Learning clip representations for Skeleton-based 3D action recognition
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2018.2812099
– volume: 173
  start-page: 1193
  issue: 12
  year: 2021
  ident: CR35
  article-title: Driver stress detection via multimodal fusion using attention-based CNN-LSTM
  publication-title: Expert Syst. Appl.
  doi: 10.1016/j.eswa.2021.114693
– volume: 33
  start-page: 8989
  year: 2019
  end-page: 8996
  ident: CR26
  article-title: Graph CNNs with motif and variable temporal block for Skeleton-based action recognition
  publication-title: Proceedings of the AAAI Conference on Artificial Intelligence
  doi: 10.1609/aaai.v33i01.33018989
– volume: 28
  start-page: 5281
  issue: 11
  year: 2019
  end-page: 5295
  ident: CR20
  article-title: Sample fusion network: an end-to-end data augmentation network for Skeleton-based human action recognition
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2019.2913544
– volume: 42
  start-page: 2684
  issue: 10
  year: 2019
  end-page: 2701
  ident: CR44
  article-title: NTU-RGB+D 120: a large-scale benchmark for 3D human activity understanding
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI)
  doi: 10.1109/tpami.2019.2916873
– ident: CR32
– ident: CR36
– volume: 17
  start-page: 1261
  issue: 6
  year: 2017
  ident: CR23
  article-title: Classification of K-Pop dance movements based on skeleton information obtained by a kinect sensor
  publication-title: Sens. (Basel).
  doi: 10.3390/s17061261
– volume: 28
  start-page: 581
  year: 2019
  end-page: 591
  ident: CR31
  article-title: Tree-structured regional CNN-LSTM model for dimensional sentiment analysis
  publication-title: IEEE/ACM Trans. Audio Speech Language Process.
  doi: 10.1109/TASKP.2019.2959251
– volume: 28
  start-page: 3959
  issue: 8
  year: 2019
  end-page: 3972
  ident: CR46
  article-title: View-Invariant Human Action Recognition Based on a 3D Bio-Constrained Skeleton Model
  publication-title: IEEE Trans Image Process.
  doi: 10.1109/TIP.2019.2907048
– volume: 20
  start-page: 1
  issue: 14
  year: 2020
  ident: CR10
  article-title: Enhanced action recognition using multiple stream deep learning with optical flow and weighted sum
  publication-title: Sens. (Basel).
  doi: 10.3390/s20143894
– volume: 1187
  start-page: 42027
  year: 2019
  ident: CR21
  article-title: Cooperative warp of two discriminative features for Skeleton based action recognition
  publication-title: J. Phys.: Conf. Ser.
  doi: 10.1088/1742-6596/1187/4/042027
– volume: 1
  start-page: 1010
  year: 2016
  end-page: 1019
  ident: CR50
  article-title: NTU RGB+D: a large scale dataset for 3D human activity analysis
  publication-title: IEEE Comput. Soc.
  doi: 10.1109/CVPR.2016.115
– volume: 43
  start-page: 16
  issue: 11
  year: 2019
  end-page: 19
  ident: CR24
  article-title: Dance-specific action recognition based on spatial skeleton sequence diagram
  publication-title: Inf. Technol.
  doi: 10.13274/j.cnki.hdzj.2019.11.004
– volume: 24
  start-page: 624
  issue: 5
  year: 2017
  end-page: 628
  ident: CR60
  article-title: Joint distance maps based action recognition with convolutional neural networks
  publication-title: IEEE Signal Process. Lett.
  doi: 10.1109/LSP.2017.2678539
– year: 2020
  ident: CR63
  article-title: Joint learning of convolution neural networks for RGB-D-based human action recognition
  publication-title: Electron. Lett.
  doi: 10.1049/el.2020.2148
– ident: CR43
– ident: CR47
– volume: 20
  start-page: 3499
  issue: 12
  year: 2020
  ident: CR45
  article-title: GAS-GCN: gated action-specific graph convolutional networks for skeleton-based action recognition
  publication-title: Sensors (Basel)
  doi: 10.3390/s20123499
– ident: CR53
– year: 2021
  ident: CR37
  article-title: DanHAR: dual attention network for multimodal human activity recognition using wearable sensors
  publication-title: Appl. Soft Comput.
  doi: 10.1016/j.asoc.2021.107728
– ident: CR6
– volume: 18
  start-page: 1059
  year: 2021
  end-page: 1072
  ident: CR40
  article-title: A novel gait pattern recognition method based on LSTM-CNN for lower limb exoskeleton
  publication-title: J. Bionic Eng.
  doi: 10.1007/s42235-021-00083-y
– volume: 121
  start-page: 95
  issue: 1
  year: 2017
  end-page: 110
  ident: CR42
  article-title: Hollywood 3D: What are the best 3D features for action recognition
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-016-0917-2
– ident: CR56
– volume: 40
  start-page: 3007
  issue: 12
  year: 2018
  end-page: 3021
  ident: CR27
  article-title: Skeleton-based action recognition using spatio-temporal LSTM network with trust gates
  publication-title: IEEE Trans. Pattern. Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2017.2771306
– volume: 33
  start-page: 12
  issue: 3
  year: 2020
  end-page: 14
  ident: CR4
  article-title: A human activity recognition method based on DBMM
  publication-title: Ind. Control Comput.
  doi: 10.3969/j.issn.1001-182X.2020.03.005
– volume: 20
  start-page: 3305
  issue: 11
  year: 2020
  ident: CR30
  article-title: A hybrid network for large-scale action recognition from RGB and depth modalities
  publication-title: Sensors (Basel).
  doi: 10.3390/s20113305
– volume: 40
  start-page: 1
  issue: 09
  year: 2019
  ident: CR33
  article-title: Short-term traffic flow forecasting method based on CNN+LSTM
  publication-title: Comput. Eng. Des.
  doi: 10.16208/j.issn1000-7024.2019.09.038
– volume: 1
  start-page: 1933
  year: 2016
  end-page: 1941
  ident: CR48
  article-title: Convolutional two-stream network fusion for video action recognition
  publication-title: Comput. Vis. Pattern Recognit.
  doi: 10.1109/CVPR.2016.213
– volume: 14
  start-page: e212320
  issue: 2
  year: 2019
  ident: CR41
  article-title: Forecasting stock prices with a feature fusion LSTM-CNN model using different representations of the same data
  publication-title: PLoS ONE
  doi: 10.1371/journal.pone.0212320
– ident: CR38
– volume: 41
  start-page: 304
  issue: 4
  year: 2020
  end-page: 307
  ident: CR13
  article-title: Human action recognition method based on deep learning
  publication-title: Comput. Eng. Des.
  doi: 10.19734/j.issn.1001-3695.2018.05.0499
– volume: 1
  start-page: 16
  year: 2019
  end-page: 23
  ident: CR25
  article-title: Skeleton image representation for 3D action recognition based on tree structure and reference joints
  publication-title: IEEE
  doi: 10.1109/SIBGRAPI.2019.00011
– volume: 29
  start-page: 3835
  year: 2020
  end-page: 3844
  ident: CR9
  article-title: View-invariant deep architecture for human action recognition using two-stream motion and shape temporal dynamics
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2020.2965299
– ident: CR55
– volume: 42
  start-page: 413
  issue: 2
  year: 2020
  end-page: 429
  ident: CR14
  article-title: Open set domain adaptation for image and action recognition
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/tpami.2018.2880750
– volume: 1
  start-page: 585
  year: 2017
  end-page: 590
  ident: CR52
  article-title: Skeleton-based action recognition using lstm and CNN
  publication-title: IEEE International Conference on Multimedia & Expo Workshops
  doi: 10.1109/ICMEW.2017.8026287
– ident: CR59
– volume: 20
  start-page: 1825
  issue: 7
  year: 2020
  ident: CR8
  article-title: A unified deep framework for joint 3D pose estimation and action recognition from a single RGB camera
  publication-title: Sensors (Basel)
  doi: 10.3390/s20071825
– volume: 32
  start-page: 288
  issue: 2
  year: 2010
  end-page: 303
  ident: CR11
  article-title: Human action recognition in videos using kinematic features and multiple instance learning
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2008.284
– volume: 20
  start-page: 2886
  issue: 10
  year: 2020
  ident: CR2
  article-title: Real-time human action recognition with a low-cost RGB camera and mobile robot platform
  publication-title: Sens. (Basel, Switzerland).
  doi: 10.3390/s20102886
– year: 2020
  ident: CR5
  article-title: A comprehensive survey of vision-based human action recognition methods
  publication-title: Sensors (Basel).
  doi: 10.3390/s19051005
– ident: CR62
– volume: 20
  start-page: 3126
  issue: 11
  year: 2020
  ident: CR19
  article-title: Spatiotemporal interaction residual networks with pseudo3D for video action recognition
  publication-title: Sensors (Basel).
  doi: 10.3390/s20113126
– volume: 20
  start-page: 2226
  issue: 8
  year: 2020
  ident: CR18
  article-title: Keys for action: an efficient keyframe-based approach for 3D action recognition using a deep neural network
  publication-title: Sensors (Basel).
  doi: 10.3390/s20082226
– volume: 173
  start-page: 1193
  issue: 12
  year: 2021
  ident: 7236_CR35
  publication-title: Expert Syst. Appl.
  doi: 10.1016/j.eswa.2021.114693
– volume: 1
  start-page: 816
  year: 2016
  ident: 7236_CR51
  publication-title: European Conference on Computer Vision (ECCV)
  doi: 10.1007/978-3-319-46487-9_50
– year: 2020
  ident: 7236_CR5
  publication-title: Sensors (Basel).
  doi: 10.3390/s19051005
– volume: 1
  start-page: 68
  year: 2020
  ident: 7236_CR7
  publication-title: China Sci. Technol. Inf.
  doi: 10.3969/j.issn.1001-8972.2020.01.023
– volume: 18
  start-page: 1059
  year: 2021
  ident: 7236_CR40
  publication-title: J. Bionic Eng.
  doi: 10.1007/s42235-021-00083-y
– volume: 28
  start-page: 5281
  issue: 11
  year: 2019
  ident: 7236_CR20
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2019.2913544
– volume: 28
  start-page: 3959
  issue: 8
  year: 2019
  ident: 7236_CR46
  publication-title: IEEE Trans Image Process.
  doi: 10.1109/TIP.2019.2907048
– volume: 1
  start-page: 568
  year: 2014
  ident: 7236_CR58
  publication-title: Montréal, Canada
  doi: 10.1002/14651858.CD001941.pub3
– volume: 40
  start-page: 2620
  issue: 009
  year: 2019
  ident: 7236_CR34
  publication-title: Comput. Eng. Des.
– volume: 41
  start-page: 304
  issue: 4
  year: 2020
  ident: 7236_CR13
  publication-title: Comput. Eng. Des.
  doi: 10.19734/j.issn.1001-3695.2018.05.0499
– volume: 6
  start-page: 50788
  year: 2018
  ident: 7236_CR61
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2018.2869751
– ident: 7236_CR38
  doi: 10.1109/ICAIIC48513.2020.9065078
– volume: 1
  start-page: 1933
  year: 2016
  ident: 7236_CR48
  publication-title: Comput. Vis. Pattern Recognit.
  doi: 10.1109/CVPR.2016.213
– ident: 7236_CR53
  doi: 10.1109/CVPR.2017.391
– volume: 20
  start-page: 3499
  issue: 12
  year: 2020
  ident: 7236_CR45
  publication-title: Sensors (Basel)
  doi: 10.3390/s20123499
– volume: 8
  start-page: 27522764
  issue: 43
  year: 2020
  ident: 7236_CR17
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2020.2976014
– ident: 7236_CR32
  doi: 10.1109/UEMCON.2017.8249013
– volume: 43
  start-page: 1
  issue: 3
  year: 2011
  ident: 7236_CR1
  publication-title: ACM Comput. Surv.
  doi: 10.1145/1922649.1922653
– volume: 11
  start-page: 47
  year: 2019
  ident: 7236_CR15
  publication-title: Netw. Secur. Technol. Appl.
  doi: 10.3969/j.issn.1009-6833.2019.11.027
– volume: 27
  start-page: 2842
  issue: 6
  year: 2018
  ident: 7236_CR22
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2018.2812099
– volume: 33
  start-page: 8989
  year: 2019
  ident: 7236_CR26
  publication-title: Proceedings of the AAAI Conference on Artificial Intelligence
  doi: 10.1609/aaai.v33i01.33018989
– volume: 43
  start-page: 137
  issue: 4
  year: 2020
  ident: 7236_CR16
  publication-title: Mod. Electron. Tech.
  doi: 10.16652/j.issn.1004-373x.2020.04.035(InChinese)
– volume: 26
  start-page: 73
  issue: 1
  year: 2020
  ident: 7236_CR28
  publication-title: J. Anqing Normal Univ. (Nat. Sci. Ed.)
  doi: 10.13757/j.cnki.cn34-1328/n.2020.01.013
– volume: 20
  start-page: 3305
  issue: 11
  year: 2020
  ident: 7236_CR30
  publication-title: Sensors (Basel)
  doi: 10.3390/s20113305
– volume: 20
  start-page: 1
  issue: 14
  year: 2020
  ident: 7236_CR10
  publication-title: Sensors (Basel)
  doi: 10.3390/s20143894
– volume: 232
  start-page: 76
  issue: 6
  year: 1975
  ident: 7236_CR3
  publication-title: Sci. Am.
  doi: 10.1038/scientificamerican0675-76
– ident: 7236_CR56
  doi: 10.1109/TMM.2018.2802648
– volume: 20
  start-page: 3126
  issue: 11
  year: 2020
  ident: 7236_CR19
  publication-title: Sensors (Basel)
  doi: 10.3390/s20113126
– volume: 121
  start-page: 95
  issue: 1
  year: 2017
  ident: 7236_CR42
  publication-title: Int. J. Comput. Vis.
  doi: 10.1007/s11263-016-0917-2
– volume: 414
  start-page: 90
  issue: 5
  year: 2020
  ident: 7236_CR39
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2020.07.068
– volume: 32
  start-page: 288
  issue: 2
  year: 2010
  ident: 7236_CR11
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2008.284
– volume: 20
  start-page: 1825
  issue: 7
  year: 2020
  ident: 7236_CR8
  publication-title: Sensors (Basel)
  doi: 10.3390/s20071825
– volume: 40
  start-page: 1
  issue: 09
  year: 2019
  ident: 7236_CR33
  publication-title: Comput. Eng. Des.
  doi: 10.16208/j.issn1000-7024.2019.09.038
– year: 2020
  ident: 7236_CR63
  publication-title: Electron. Lett.
  doi: 10.1049/el.2020.2148
– volume: 33
  start-page: 12
  issue: 3
  year: 2020
  ident: 7236_CR4
  publication-title: Ind. Control Comput.
  doi: 10.3969/j.issn.1001-182X.2020.03.005
– volume: 1
  start-page: 585
  year: 2017
  ident: 7236_CR52
  publication-title: IEEE International Conference on Multimedia & Expo Workshops
  doi: 10.1109/ICMEW.2017.8026287
– ident: 7236_CR59
  doi: 10.1109/CVPR.2017.387
– volume: 43
  start-page: 16
  issue: 11
  year: 2019
  ident: 7236_CR24
  publication-title: Inf. Technol.
  doi: 10.13274/j.cnki.hdzj.2019.11.004
– volume: 17
  start-page: 1261
  issue: 6
  year: 2017
  ident: 7236_CR23
  publication-title: Sensors (Basel)
  doi: 10.3390/s17061261
– volume: 1187
  start-page: 42027
  year: 2019
  ident: 7236_CR21
  publication-title: J. Phys.: Conf. Ser.
  doi: 10.1088/1742-6596/1187/4/042027
– volume: 40
  start-page: 3007
  issue: 12
  year: 2018
  ident: 7236_CR27
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2017.2771306
– volume: 42
  start-page: 2684
  issue: 10
  year: 2019
  ident: 7236_CR44
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2019.2916873
– volume: 24
  start-page: 624
  issue: 5
  year: 2017
  ident: 7236_CR60
  publication-title: IEEE Signal Process. Lett.
  doi: 10.1109/LSP.2017.2678539
– ident: 7236_CR43
  doi: 10.1109/CVPR.2016.115
– ident: 7236_CR55
  doi: 10.1109/ICPR.2018.8545247
– volume: 39
  start-page: 677
  issue: 4
  year: 2017
  ident: 7236_CR29
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2016.2599174
– ident: 7236_CR47
– volume: 29
  start-page: 3835
  year: 2020
  ident: 7236_CR9
  publication-title: IEEE Trans. Image Process.
  doi: 10.1109/TIP.2020.2965299
– ident: 7236_CR49
  doi: 10.1007/978-3-319-46487-9_50
– volume: 1
  start-page: 16
  year: 2019
  ident: 7236_CR25
  publication-title: IEEE
  doi: 10.1109/SIBGRAPI.2019.00011
– volume: 28
  start-page: 581
  year: 2019
  ident: 7236_CR31
  publication-title: IEEE/ACM Trans. Audio Speech Language Process.
  doi: 10.1109/TASLP.2019.2959251
– volume: 20
  start-page: 2226
  issue: 8
  year: 2020
  ident: 7236_CR18
  publication-title: Sensors (Basel)
  doi: 10.3390/s20082226
– volume: 20
  start-page: 2886
  issue: 10
  year: 2020
  ident: 7236_CR2
  publication-title: Sensors (Basel)
  doi: 10.3390/s20102886
– ident: 7236_CR6
  doi: 10.1109/CVPR.2005.177
– ident: 7236_CR54
  doi: 10.1109/ICCV.2017.233
– ident: 7236_CR12
  doi: 10.1109/AUTEEE.2018.8720753
– volume: 14
  start-page: e0212320
  issue: 2
  year: 2019
  ident: 7236_CR41
  publication-title: PLoS ONE
  doi: 10.1371/journal.pone.0212320
– ident: 7236_CR62
  doi: 10.1109/SIBGRAPI.2019.00011
– volume: 1
  start-page: 1010
  year: 2016
  ident: 7236_CR50
  publication-title: IEEE Comput. Soc.
  doi: 10.1109/CVPR.2016.115
– year: 2021
  ident: 7236_CR37
  publication-title: Appl. Soft Comput.
  doi: 10.1016/j.asoc.2021.107728
– volume: 414
  start-page: 90
  year: 2020
  ident: 7236_CR57
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2020.07.068
– ident: 7236_CR36
  doi: 10.1109/WCSP.2018.8555945
– volume: 42
  start-page: 413
  issue: 2
  year: 2020
  ident: 7236_CR14
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2018.2880750
SSID ssib048395113
ssj0001916267
ssj0061873
Score 2.3016999
Snippet Human action recognition (HAR) technology is currently of significant interest. The traditional HAR methods depend on the time and space of the video stream...
SourceID proquest
crossref
springer
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 2331
SubjectTerms Accuracy
Computational efficiency
Engineering
Human activity recognition
Human motion
Humanities and Social Sciences
multidisciplinary
Networks
Real time
Research Article-Computer Engineering and Computer Science
Response time
Response time (computers)
Science
Three dimensional models
Training
Video data
Title Human Action Recognition Research Based on Fusion TS-CNN and LSTM Networks
URI https://link.springer.com/article/10.1007/s13369-022-07236-z
https://www.proquest.com/docview/2774560092
Volume 48
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVEBS
  databaseName: EBSCOhost Academic Search Ultimate
  customDbUrl: https://search.ebscohost.com/login.aspx?authtype=ip,shib&custid=s3936755&profile=ehost&defaultdb=asn
  eissn: 2191-4281
  dateEnd: 20241105
  omitProxy: true
  ssIdentifier: ssj0001916267
  issn: 2193-567X
  databaseCode: ABDBF
  dateStart: 20041001
  isFulltext: true
  titleUrlDefault: https://search.ebscohost.com/direct.asp?db=asn
  providerName: EBSCOhost
– providerCode: PRVFQY
  databaseName: GFMER Free Medical Journals
  customDbUrl:
  eissn: 2191-4281
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0061873
  issn: 2193-567X
  databaseCode: GX1
  dateStart: 20020101
  isFulltext: true
  titleUrlDefault: http://www.gfmer.ch/Medical_journals/Free_medical.php
  providerName: Geneva Foundation for Medical Education and Research
– providerCode: PRVLSH
  databaseName: SpringerLink Journals
  customDbUrl:
  mediaType: online
  eissn: 2191-4281
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0001916267
  issn: 2193-567X
  databaseCode: AFBBN
  dateStart: 20110101
  isFulltext: true
  providerName: Library Specific Holdings
– providerCode: PRVAVX
  databaseName: SpringerLINK - Czech Republic Consortium
  customDbUrl:
  eissn: 2191-4281
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0061873
  issn: 2193-567X
  databaseCode: AGYKE
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: http://link.springer.com
  providerName: Springer Nature
– providerCode: PRVAVX
  databaseName: SpringerLink Journals (ICM)
  customDbUrl:
  eissn: 2191-4281
  dateEnd: 99991231
  omitProxy: true
  ssIdentifier: ssj0061873
  issn: 2193-567X
  databaseCode: U2A
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: http://www.springerlink.com/journals/
  providerName: Springer Nature
link http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwnV07b9swECYaZ2mHok_UbRpw6OaykEm9ODpBHSNIPDQOYHQRSIkEChRukdiLf33v-JAUty6SLIJMU4Kt-3SPj7w7Qj4VPOe20ZxpW2iWapuwMqsbZgXEHgYMQqmQGric57Pr9HyZLTsyx2WXrPWXevvPvJLHSBXGQK6YJfsAybY3hQE4B_nCESQMx3vJ2DPwE9_t-1vcCuTOYyoDGKkGFwSmG6TFRosrdjqfuxWDi6vFJeb74s6s276POrlRWIi8LSqBGxFj9g9eaLoShuifom3v8QnfPaM62_zocdKOjz1TwUoGkoGLuC95h2Qc7avBBeoKVJ9gWe4a34NliWMQoXLflCXq27Ts4Yr3lWdM3jLhoy80-ZeST0LSsxC5ZJiNkBRc5GzbmbR2o2FXlBknVzC5cpOr7QE55GAIkgE5nExPTuZRCaXgMYITKjqiDhxo7joRt_8wJF_5FMzdX3HXwemilp2Fdue_LF6Q5yHwoBOPopfkiVm9Is965Shfk3OHJ-rxRHt4ohFP1OGJwojHE_V4ogALiniiEU9vyPX06-J0xkKvDVaDEl4zA26jEDYTvFC80Y22pRWF1lmidWkVthUwSjcirzMptSys1ULJ1GCBSogRuHhLBqtfK_OO0MIKJZQtpB6rtM6NrJXMJE8017Uok2ZIxvEBVXUoRI_9UH5W-6U1JKP2mt--DMt_Zx_F516Fd-W24hDooHsv-ZB8jrLovt5_t_cPm_6BPO3eoCMyWN9szEdwXNf6OEDtmBycLcd_AN99jz8
linkProvider Library Specific Holdings
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Human+Action+Recognition+Research+Based+on+Fusion+TS-CNN+and+LSTM+Networks&rft.jtitle=Arabian+journal+for+science+and+engineering+%282011%29&rft.au=Zan%2C+Hui&rft.au=Zhao%2C+Gang&rft.date=2023-02-01&rft.pub=Springer+Berlin+Heidelberg&rft.issn=2193-567X&rft.eissn=2191-4281&rft.volume=48&rft.issue=2&rft.spage=2331&rft.epage=2345&rft_id=info:doi/10.1007%2Fs13369-022-07236-z&rft.externalDocID=10_1007_s13369_022_07236_z
thumbnail_l http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=2193-567X&client=summon
thumbnail_m http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=2193-567X&client=summon
thumbnail_s http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=2193-567X&client=summon