Cross-Subject and Cross-Modal Transfer for Generalized Abnormal Gait Pattern Recognition

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 32, No. 2, pp. 546-560
Main Authors: Gu, Xiao; Guo, Yao; Deligianni, Fani; Lo, Benny; Yang, Guang-Zhong
Format: Journal Article
Language: English
Published: United States: IEEE, 01.02.2021 (The Institute of Electrical and Electronics Engineers, Inc.)
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2020.3009448

Abstract: For abnormal gait recognition, pattern-specific features indicating abnormalities are interleaved with the subject-specific differences representing biometric traits. Deep representations are, therefore, prone to overfitting, and the models derived cannot generalize well to new subjects. Furthermore, there is limited availability of abnormal gait data obtained from precise Motion Capture (Mocap) systems because of regulatory issues and the slow adoption of new technologies in health care. On the other hand, data captured from markerless vision sensors or wearable sensors can be obtained in home environments, but noise from such devices may prevent the effective extraction of relevant features. To address these challenges, we propose a cascade of deep architectures that can encode cross-modal and cross-subject transfer for abnormal gait recognition. Cross-modal transfer maps noisy data obtained from RGBD and wearable sensors to accurate 4-D representations of the lower limb and joints obtained from the Mocap system. Subsequently, cross-subject transfer allows disentangling subject-specific from abnormal pattern-specific gait features based on a multiencoder autoencoder architecture. To validate the proposed methodology, we obtained multimodal gait data based on a multicamera motion capture system, along with synchronized recordings of electromyography (EMG) data and 4-D skeleton data extracted from a single RGBD camera. Classification accuracy was improved significantly in both Mocap and noisy modalities.
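To make the cross-modal transfer idea concrete, below is a minimal sketch in PyTorch of the general technique the abstract describes: a sequence model regressing clean Mocap-quality joint trajectories from noisy RGBD/wearable inputs. All layer choices, tensor shapes, and names (CrossModalRegressor, noisy_dim, mocap_dim) are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch (assumed shapes): regress clean Mocap joint trajectories
# from noisy RGBD skeleton sequences, frame by frame.
import torch
import torch.nn as nn

class CrossModalRegressor(nn.Module):
    def __init__(self, noisy_dim=36, mocap_dim=24, hidden=128):
        super().__init__()
        # A bidirectional GRU summarizes temporal context in the noisy sequence.
        self.encoder = nn.GRU(noisy_dim, hidden, batch_first=True, bidirectional=True)
        # A linear head maps each time step's features to Mocap joint coordinates.
        self.head = nn.Linear(2 * hidden, mocap_dim)

    def forward(self, noisy_seq):              # (batch, time, noisy_dim)
        feats, _ = self.encoder(noisy_seq)     # (batch, time, 2 * hidden)
        return self.head(feats)                # (batch, time, mocap_dim)

model = CrossModalRegressor()
noisy = torch.randn(8, 100, 36)                # e.g., 100 frames of noisy RGBD joints
target = torch.randn(8, 100, 24)               # time-synchronized Mocap ground truth
loss = nn.functional.mse_loss(model(noisy), target)
loss.backward()
```

Training such a mapping requires time-synchronized pairs of noisy and Mocap sequences, which is exactly what the multimodal dataset described in the abstract provides.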
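The cross-subject stage hinges on a multiencoder autoencoder that separates subject identity from abnormality patterns. The sketch below, again in PyTorch with assumed dimensions and hypothetical names (DisentanglingAE, subj_dim, patt_dim), illustrates only the general two-encoder/one-decoder layout; the paper's actual losses and training protocol for enforcing disentanglement across subjects are not reproduced here.

```python
# Minimal sketch (assumed shapes): two encoders split a gait sequence into a
# subject code and a pattern code; one decoder reconstructs the sequence
# from their concatenation.
import torch
import torch.nn as nn

class DisentanglingAE(nn.Module):
    def __init__(self, in_dim=24, subj_dim=16, patt_dim=16, hidden=64):
        super().__init__()
        self.subject_enc = nn.GRU(in_dim, subj_dim, batch_first=True)
        self.pattern_enc = nn.GRU(in_dim, patt_dim, batch_first=True)
        self.decoder = nn.GRU(subj_dim + patt_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, in_dim)

    def forward(self, seq):                        # (batch, time, in_dim)
        _, h_subj = self.subject_enc(seq)          # final hidden: (1, batch, subj_dim)
        _, h_patt = self.pattern_enc(seq)          # final hidden: (1, batch, patt_dim)
        z = torch.cat([h_subj[-1], h_patt[-1]], dim=-1)
        # Broadcast the joint code over time and decode back to a sequence.
        z_seq = z.unsqueeze(1).repeat(1, seq.size(1), 1)
        recon, _ = self.decoder(z_seq)
        return self.out(recon), h_subj[-1], h_patt[-1]

model = DisentanglingAE()
gait = torch.randn(8, 100, 24)                     # a batch of gait sequences
recon, subj_code, patt_code = model(gait)
loss = nn.functional.mse_loss(recon, gait)         # reconstruction term only
```

In designs of this kind, an abnormality classifier is typically attached to the pattern code while reconstruction uses both codes, so the pattern code is pushed to carry the clinically relevant information and the subject code absorbs biometric variation.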
Author Details:
1. Gu, Xiao (ORCID: 0000-0002-3015-5818; xiao.gu17@imperial.ac.uk), Hamlyn Centre, Institute of Global Health Innovation, Imperial College London, London, U.K.
2. Guo, Yao (ORCID: 0000-0001-8041-1245; yao.guo@imperial.ac.uk), Hamlyn Centre, Institute of Global Health Innovation, Imperial College London, London, U.K.
3. Deligianni, Fani (ORCID: 0000-0003-1306-5017; fani.deligianni@glasgow.ac.uk), School of Computing Science, University of Glasgow, Glasgow, U.K.
4. Lo, Benny (ORCID: 0000-0002-5080-108X; benny.lo@imperial.ac.uk), Hamlyn Centre, Institute of Global Health Innovation, Imperial College London, London, U.K.
5. Yang, Guang-Zhong (gzy@sjtu.edu.cn), Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
CODEN: ITNNAL
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
Discipline: Computer Science
External Document IDs: PMID 32726285; Crossref 10.1109/TNNLS.2020.3009448; IEEE Xplore 9152163
Genre: Original Research; Research Support, Non-U.S. Gov't; Journal Article
Grant Information: Newton Fund Institutional Links, Grant 330760239 (funder ID: 10.13039/100010897)
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
Page Count: 15
Subject Terms:
Abnormalities
Algorithms
Biomechanical Phenomena
Biometry
Body sensor network
Computer Systems
Data mining
Deep Learning
Electromyography
Feature extraction
Feature recognition
Gait
gait analysis
Gait Disorders, Neurologic - diagnosis
Gait recognition
Health care
Home Environment
Humans
Imaging, Three-Dimensional
Joints - diagnostic imaging
Kinematics
Lower Extremity - diagnostic imaging
model generalization
Motion capture
multimodal representation
Neural Networks, Computer
New technology
Noise measurement
Pattern recognition
Pattern Recognition, Automated - methods
Representations
Reproducibility of Results
Sensors
Skeleton
Wearable Electronic Devices
Wearable technology
Online Access:
https://ieeexplore.ieee.org/document/9152163
https://www.ncbi.nlm.nih.gov/pubmed/32726285
https://www.proquest.com/docview/2487436711
https://www.proquest.com/docview/2429057862