B2C-AFM: Bi-directional Co-Temporal and Cross-Spatial Attention Fusion Model for Human Action Recognition

Bibliographic Details
Published in IEEE Transactions on Image Processing, Vol. 32, p. 1
Main Authors Guo, Fangtai, Jin, Tianlei, Zhu, Shiqiang, Xi, Xiangming, Wang, Wen, Meng, Qiwei, Song, Wei, Zhu, Jiakai
Format Journal Article
Language English
Published New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.01.2023
ISSN 1057-7149
EISSN 1941-0042
DOI 10.1109/TIP.2023.3308750
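
Since the record carries a DOI, its core metadata can be cross-checked programmatically. The sketch below queries the public Crossref REST API; the endpoint and JSON field names follow Crossref's documented schema, but the snippet itself is only an illustration, not part of this record.

import json
import urllib.request

# DOI taken verbatim from the record above.
DOI = "10.1109/TIP.2023.3308750"

# Crossref's works endpoint returns {"status": ..., "message": {...}}.
with urllib.request.urlopen(f"https://api.crossref.org/works/{DOI}") as resp:
    work = json.load(resp)["message"]

print(work["title"][0])                       # article title
print(work["container-title"][0])             # journal name
print([a["family"] for a in work["author"]])  # author surnames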


Abstract Human Action Recognition serves as the driving engine of many human-computer interaction applications. Most current research focuses on improving model generalization by integrating multiple homogeneous modalities, including RGB images, human poses, and optical flows. Furthermore, contextual interactions and out-of-context sign languages have been shown to depend on the scene category and on the human per se. Attempts to integrate appearance features and human poses have shown positive results. However, because human poses carry spatial errors and temporal ambiguities, existing methods suffer from poor scalability, limited robustness, and sub-optimal models. In this paper, inspired by the assumption that different modalities may maintain temporal consistency and spatial complementarity, we present a novel Bi-directional Co-temporal and Cross-spatial Attention Fusion Model (B2C-AFM). Our model is characterized by an asynchronous fusion strategy for multi-modal features along the temporal and spatial dimensions. In addition, novel explicit motion-oriented pose representations, called Limb Flow Fields (Lff), are explored to alleviate the temporal ambiguity of human poses. Experiments on publicly available datasets validate our contributions, and extensive ablation studies show that B2C-AFM achieves robust performance across seen and unseen human actions. The code is available at https://github.com/gftww/B2C.git.
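
For intuition about the fusion mechanism the abstract describes, the following is a minimal, hypothetical PyTorch sketch of generic bi-directional cross-attention between two modality streams (e.g., RGB and pose features). It is not the authors' B2C-AFM implementation, which is defined in the paper and its repository; every class name, shape, and hyper-parameter below is an assumption.

import torch
import torch.nn as nn

class BiCrossAttentionFusion(nn.Module):
    # Hypothetical module: each modality attends to the other, then the
    # attended context is fused back through a residual connection.
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.rgb_to_pose = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pose_to_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_rgb = nn.LayerNorm(dim)
        self.norm_pose = nn.LayerNorm(dim)

    def forward(self, rgb: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # rgb, pose: (batch, time, dim) token sequences from each backbone.
        rgb_ctx, _ = self.rgb_to_pose(query=rgb, key=pose, value=pose)
        pose_ctx, _ = self.pose_to_rgb(query=pose, key=rgb, value=rgb)
        rgb = self.norm_rgb(rgb + rgb_ctx)       # residual fusion, RGB branch
        pose = self.norm_pose(pose + pose_ctx)   # residual fusion, pose branch
        # Pool over time and concatenate the two fused streams.
        return torch.cat([rgb.mean(dim=1), pose.mean(dim=1)], dim=-1)

fused = BiCrossAttentionFusion(dim=256)(
    torch.randn(2, 16, 256), torch.randn(2, 16, 256))
print(fused.shape)  # torch.Size([2, 512])

The asynchronous fusion the abstract emphasizes would, in this picture, split the single block above into separate stages operating along the temporal and spatial dimensions rather than one joint attention pass.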
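
The Limb Flow Fields (Lff) named in the abstract are defined precisely in the paper; as a rough illustration of the underlying idea, an explicit motion-oriented pose representation, the hypothetical sketch below computes per-joint displacement vectors between consecutive pose frames. The function name and array shapes are assumptions.

import numpy as np

def joint_flow(poses: np.ndarray) -> np.ndarray:
    """poses: (T, J, 2) array of J 2-D joint coordinates over T frames.
    Returns a (T-1, J, 2) array of frame-to-frame displacement vectors,
    a crude explicit encoding of pose motion."""
    return poses[1:] - poses[:-1]

poses = np.random.rand(8, 17, 2)  # e.g., 8 frames of 17 COCO-style joints
flow = joint_flow(poses)
print(flow.shape)  # (7, 17, 2)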
Author_xml – sequence: 1
  givenname: Fangtai
  orcidid: 0000-0002-4749-9908
  surname: Guo
  fullname: Guo, Fangtai
  organization: Research Center for Intelligent Robotics, Zhejiang Lab, China
– sequence: 2
  givenname: Tianlei
  surname: Jin
  fullname: Jin, Tianlei
  organization: Research Center for Intelligent Robotics, Zhejiang Lab, China
– sequence: 3
  givenname: Shiqiang
  orcidid: 0000-0002-5687-4001
  surname: Zhu
  fullname: Zhu, Shiqiang
  organization: Research Center for Intelligent Robotics, Zhejiang Lab, China
– sequence: 4
  givenname: Xiangming
  orcidid: 0000-0003-2786-8144
  surname: Xi
  fullname: Xi, Xiangming
  organization: Research Center for Intelligent Robotics, Zhejiang Lab, China
– sequence: 5
  givenname: Wen
  surname: Wang
  fullname: Wang, Wen
  organization: Research Center for Intelligent Robotics, Zhejiang Lab, China
– sequence: 6
  givenname: Qiwei
  surname: Meng
  fullname: Meng, Qiwei
  organization: Research Center for Intelligent Robotics, Zhejiang Lab, China
– sequence: 7
  givenname: Wei
  orcidid: 0000-0002-0828-7486
  surname: Song
  fullname: Song, Wei
  organization: Research Center for Intelligent Robotics, Zhejiang Lab, China
– sequence: 8
  givenname: Jiakai
  surname: Zhu
  fullname: Zhu, Jiakai
  organization: Research Center for Intelligent Robotics, Zhejiang Lab, China
CODEN IIPRE4
CitedBy_id crossref_primary_10_1016_j_eswa_2025_126965
crossref_primary_10_1109_TIP_2025_3533205
crossref_primary_10_1016_j_eswa_2023_123061
crossref_primary_10_1016_j_neucom_2024_127882
crossref_primary_10_1007_s11042_023_17626_6
crossref_primary_10_1016_j_knosys_2024_112523
crossref_primary_10_1007_s11042_024_20407_4
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
DOI 10.1109/TIP.2023.3308750
Discipline Applied Sciences
Engineering
EISSN 1941-0042
EndPage 1
Genre orig-research
GrantInformation_xml – fundername: National Natural Science Foundation of China
  grantid: U21A20488
  funderid: 10.13039/501100001809
– fundername: Youth Foundation Project of Zhejiang Lab
  grantid: K2023NB0AA02
– fundername: Key Research Project of Zhejiang Lab
  grantid: G2021NB0AL03
ISSN 1057-7149
IsPeerReviewed true
IsScholarly true
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0003-2786-8144
0000-0002-4749-9908
0000-0002-0828-7486
0000-0002-5687-4001
0000-0001-8682-7946
0009-0009-1644-2341
PageCount 1
PublicationDate 2023-01-01
PublicationPlace New York
PublicationTitle IEEE transactions on image processing
PublicationTitleAbbrev TIP
PublicationYear 2023
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 1
SubjectTerms Ablation
B2C-AFM
Color imagery
fusion model
homogeneous modalities
Human action recognition
Human activity recognition
limb flow fields
Optical flow (image analysis)
Title B2C-AFM: Bi-directional Co-Temporal and Cross-Spatial Attention Fusion Model for Human Action Recognition
URI https://ieeexplore.ieee.org/document/10235872
https://www.proquest.com/docview/2862637935
https://www.proquest.com/docview/2859603683
Volume 32