Spatial transformer network on skeleton‐based gait recognition


Bibliographic Details
Published in: Expert Systems, Vol. 40, No. 6
Main Authors: Zhang, Cun; Chen, Xing‐Peng; Han, Guo‐Qiang; Liu, Xiang‐Jie
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.07.2023
ISSN: 0266-4720, 1468-0394
DOI: 10.1111/exsy.13244


More Information
Summary: Skeleton‐based gait recognition models suffer from a robustness problem: rank‐1 accuracy drops from 90% for normal walking to 70% for walking in a coat. In this work, we propose a state‐of‐the‐art robust skeleton‐based gait recognition model, Gait‐TR, which combines spatial transformer frameworks with temporal convolutional networks. Gait‐TR achieves substantial improvements over other skeleton‐based gait models, with higher accuracy and better robustness, on the well‐known CASIA‐B gait dataset. In particular, in the walking‐with‐coats condition, Gait‐TR reaches ∼90% accuracy, exceeding the best result of silhouette‐based models, which are usually more accurate than skeleton‐based ones. Moreover, our experiments on CASIA‐B show that the spatial transformer network extracts gait features from the human skeleton better than the widely used graph convolutional network.
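The summary describes Gait‐TR as a two‐stage design: a spatial transformer that relates skeleton joints within each frame, followed by a temporal convolutional network that aggregates features across frames. A minimal NumPy sketch of that idea follows; all shapes, function names, and weights here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(x, wq, wk, wv):
    """Self-attention across skeleton joints, applied independently per frame.

    x: (T, J, C) sequence of T frames, J joints, C feature channels.
    """
    q, k, v = x @ wq, x @ wk, x @ wv                          # each (T, J, C)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(x.shape[-1])  # (T, J, J)
    return softmax(scores, axis=-1) @ v                       # (T, J, C)

def temporal_conv(x, kernel):
    """1-D convolution along the time axis, shared across joints and channels.

    kernel: (K,) weights; 'same' zero-padding keeps all T frames.
    """
    T = x.shape[0]
    K = len(kernel)
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0), (0, 0)))
    out = np.zeros_like(x)
    for t in range(T):
        # Weighted sum of K consecutive frames around t.
        out[t] = np.tensordot(kernel, xp[t:t + K], axes=(0, 0))
    return out

# Toy forward pass: 30 frames, 17 joints, 8 channels.
rng = np.random.default_rng(0)
T, J, C = 30, 17, 8
x = rng.standard_normal((T, J, C))
wq, wk, wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
feat = temporal_conv(spatial_attention(x, wq, wk, wv),
                     np.array([0.25, 0.5, 0.25]))
print(feat.shape)  # (30, 17, 8)
```

The per‐frame attention plays the role the summary attributes to the spatial transformer (and, by contrast, to the graph convolutional networks it is compared against), while the smoothing kernel stands in for the temporal convolutional stage; a real model would stack several such blocks with learned weights.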