Hypergraph-Based Multi-View Action Recognition Using Event Cameras

Bibliographic Details
Published in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 46, No. 10, pp. 6610-6622
Main Authors Gao, Yue; Lu, Jiaxuan; Li, Siqi; Li, Yipeng; Du, Shaoyi
Format Journal Article
Language English
Published United States, IEEE, 01.10.2024
Subjects
ISSN 0162-8828
1939-3539
2160-9292
DOI 10.1109/TPAMI.2024.3382117


Abstract Action recognition from video data forms a cornerstone with wide-ranging applications. Single-view action recognition faces limitations due to its reliance on a single viewpoint. In contrast, multi-view approaches capture complementary information from various viewpoints for improved accuracy. Recently, event cameras have emerged as innovative bio-inspired sensors, leading to advancements in event-based action recognition. However, existing works predominantly focus on single-view scenarios, leaving a gap in multi-view event data exploitation, particularly in challenges like information deficit and semantic misalignment. To bridge this gap, we introduce HyperMV, a multi-view event-based action recognition framework. HyperMV converts discrete event data into frame-like representations and extracts view-related features using a shared convolutional network. By treating segments as vertices and constructing hyperedges using rule-based and KNN-based strategies, a multi-view hypergraph neural network that captures relationships across viewpoint and temporal features is established. The vertex attention hypergraph propagation is also introduced for enhanced feature fusion. To prompt research in this area, we present the largest multi-view event-based action dataset THU^MV-EACT-50, comprising 50 actions from 6 viewpoints, which surpasses existing datasets by over tenfold. Experimental results show that HyperMV significantly outperforms baselines in both cross-subject and cross-view scenarios, and also exceeds the state of the art in frame-based multi-view action recognition.
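The hyperedge construction described in the abstract can be illustrated with a short sketch; this is not the authors' code. It assumes PyTorch, a view-major ordering of per-segment features, and hypothetical names (build_incidence, num_views, num_segs, k). Rule-based hyperedges group all temporal segments of one view and the same time step across all views; KNN-based hyperedges group each segment with its nearest neighbours in feature space.

import torch

def build_incidence(feats: torch.Tensor, num_views: int, num_segs: int, k: int = 3) -> torch.Tensor:
    # feats: (num_views * num_segs, C) segment features, ordered view-major
    # (all segments of view 0, then view 1, ...).
    n = num_views * num_segs
    edges = []
    # Rule-based hyperedges: all temporal segments belonging to one view ...
    for v in range(num_views):
        e = torch.zeros(n)
        e[v * num_segs:(v + 1) * num_segs] = 1.0
        edges.append(e)
    # ... and the same time step observed from every view.
    for t in range(num_segs):
        e = torch.zeros(n)
        e[t::num_segs] = 1.0
        edges.append(e)
    # KNN-based hyperedges: each vertex together with its k nearest neighbours in feature space.
    dist = torch.cdist(feats, feats)                 # (n, n) pairwise Euclidean distances
    knn = dist.topk(k + 1, largest=False).indices    # k+1 because each vertex is its own nearest point
    for i in range(n):
        e = torch.zeros(n)
        e[knn[i]] = 1.0
        edges.append(e)
    return torch.stack(edges, dim=1)                 # incidence matrix H of shape (n, num_hyperedges)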
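Once the incidence matrix is built, fusion over the hypergraph can be sketched as a plain, non-attentive hypergraph convolution, X' = sigma(D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Theta) with uniform hyperedge weights (W = I). This is an assumed baseline in the style of standard hypergraph neural network layers, not the paper's vertex attention hypergraph propagation, which additionally learns to reweight vertex-hyperedge incidences; the names hypergraph_conv and theta are illustrative.

import torch

def hypergraph_conv(X: torch.Tensor, H: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    # X: (n, C) vertex features, H: (n, E) incidence matrix, theta: (C, C_out) learnable weights.
    Dv = H.sum(dim=1)                                    # vertex degrees
    De = H.sum(dim=0)                                    # hyperedge degrees
    Dv_inv_sqrt = torch.diag(Dv.clamp(min=1e-6).pow(-0.5))
    De_inv = torch.diag(De.clamp(min=1e-6).pow(-1.0))
    # Propagate vertex -> hyperedge -> vertex with uniform hyperedge weights (W = I).
    return torch.relu(Dv_inv_sqrt @ H @ De_inv @ H.t() @ Dv_inv_sqrt @ X @ theta)

# Toy usage: 6 viewpoints x 8 temporal segments with 64-d features
# (build_incidence is the sketch shown above).
feats = torch.randn(6 * 8, 64)
H = build_incidence(feats, num_views=6, num_segs=8, k=3)
out = hypergraph_conv(feats, H, torch.randn(64, 64))
print(H.shape, out.shape)   # torch.Size([48, 62]) torch.Size([48, 64])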
Author Lu, Jiaxuan
Li, Siqi
Li, Yipeng
Gao, Yue
Du, Shaoyi
Author_xml – sequence: 1
  givenname: Yue
  orcidid: 0000-0002-4971-590X
  surname: Gao
  fullname: Gao, Yue
  email: gaoyue@tsinghua.edu.cn
  organization: BNRist, THUIBCS, BLBCI, School of Software, Tsinghua University, Beijing, China
– sequence: 2
  givenname: Jiaxuan
  orcidid: 0000-0003-3566-3050
  surname: Lu
  fullname: Lu, Jiaxuan
  email: lujiaxuan@pjlab.org.cn
  organization: Shanghai Artificial Intelligence Laboratory, Shanghai, China
– sequence: 3
  givenname: Siqi
  orcidid: 0000-0001-9720-826X
  surname: Li
  fullname: Li, Siqi
  email: lsq19@mails.tsinghua.edu.cn
  organization: BNRist, THUIBCS, BLBCI, School of Software, Tsinghua University, Beijing, China
– sequence: 4
  givenname: Yipeng
  orcidid: 0000-0001-9099-4077
  surname: Li
  fullname: Li, Yipeng
  email: liep@tsinghua.edu.cn
  organization: Department of Automation, BNRist, THUIBCS, BLBCI, Tsinghua University, Beijing, China
– sequence: 5
  givenname: Shaoyi
  orcidid: 0000-0002-7092-0596
  surname: Du
  fullname: Du, Shaoyi
  email: dushaoyi@xjtu.edu.cn
  organization: Department of Ultrasound, The Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
BackLink https://www.ncbi.nlm.nih.gov/pubmed/38536691 (View this record in MEDLINE/PubMed)
CODEN ITPIDJ
CitedBy_id crossref_primary_10_1016_j_image_2024_117244
ContentType Journal Article
DOI 10.1109/TPAMI.2024.3382117
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
MEDLINE - Academic
DatabaseTitle CrossRef
PubMed
MEDLINE - Academic
DatabaseTitleList
MEDLINE - Academic
PubMed
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: RIE
  name: IEEE Xplore
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
Computer Science
EISSN 2160-9292
1939-3539
EndPage 6622
ExternalDocumentID 38536691
10_1109_TPAMI_2024_3382117
10480584
Genre orig-research
Journal Article
GrantInformation_xml – fundername: National Science and Technology Major Project of China
  grantid: 2020AAA0108102
– fundername: National Natural Science Foundation of China; National Natural Science Funds of China
  grantid: 62088102; 62021002
  funderid: 10.13039/501100001809
IEDL.DBID RIE
ISSN 0162-8828
1939-3539
IngestDate Sat Sep 27 21:20:08 EDT 2025
Thu Apr 03 07:03:34 EDT 2025
Wed Oct 01 02:24:14 EDT 2025
Thu Apr 24 23:11:10 EDT 2025
Wed Aug 27 02:00:05 EDT 2025
IsPeerReviewed true
IsScholarly true
Issue 10
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
ORCID 0000-0002-4971-590X
0000-0001-9720-826X
0000-0003-3566-3050
0000-0001-9099-4077
0000-0002-7092-0596
PMID 38536691
PQID 3014009814
PQPubID 23479
PageCount 13
ParticipantIDs ieee_primary_10480584
pubmed_primary_38536691
crossref_primary_10_1109_TPAMI_2024_3382117
proquest_miscellaneous_3014009814
crossref_citationtrail_10_1109_TPAMI_2024_3382117
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2024-10-01
PublicationDateYYYYMMDD 2024-10-01
PublicationDate_xml – month: 10
  year: 2024
  text: 2024-10-01
  day: 01
PublicationDecade 2020
PublicationPlace United States
PublicationPlace_xml – name: United States
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2024
Publisher IEEE
Publisher_xml – name: IEEE
SSID ssj0014503
SourceID proquest
pubmed
crossref
ieee
SourceType Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 6610
SubjectTerms Cameras
dynamic vision sensor
event camera
Feature extraction
hypergraph neural network
Multi-view action recognition
Neural networks
Robot vision systems
Semantics
Task analysis
Vision sensors
Title Hypergraph-Based Multi-View Action Recognition Using Event Cameras
URI https://ieeexplore.ieee.org/document/10480584
https://www.ncbi.nlm.nih.gov/pubmed/38536691
https://www.proquest.com/docview/3014009814
Volume 46
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVIEE
  databaseName: IEEE Xplore
  customDbUrl:
  eissn: 2160-9292
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0014503
  issn: 0162-8828
  databaseCode: RIE
  dateStart: 19790101
  isFulltext: true
  titleUrlDefault: https://ieeexplore.ieee.org/
  providerName: IEEE