Spatial Temporal Transformer Network for Skeleton-Based Action Recognition
| Published in | Pattern Recognition. ICPR International Workshops and Challenges, Vol. 12663, pp. 694-701 |
|---|---|
| Main Authors | Chiara Plizzari, Marco Cannici, Matteo Matteucci |
| Format | Book Chapter |
| Language | English |
| Published | Switzerland: Springer International Publishing AG, 2021 |
| Series | Lecture Notes in Computer Science |
| Subjects | |
| ISBN | 9783030687953 3030687953 |
| ISSN | 0302-9743 1611-3349 |
| DOI | 10.1007/978-3-030-68796-0_50 |
| Summary: | Skeleton-based human action recognition has attracted great interest in recent years, as skeleton data has been demonstrated to be robust to illumination changes, body scales, dynamic camera views, and complex backgrounds. Nevertheless, an effective encoding of the latent information underlying the 3D skeleton is still an open problem. In this work, we propose a novel Spatial-Temporal Transformer network (ST-TR) that models dependencies between joints using the Transformer self-attention operator. In our ST-TR model, a Spatial Self-Attention module (SSA) captures intra-frame interactions between different body parts, while a Temporal Self-Attention module (TSA) models inter-frame correlations. The two are combined in a two-stream network that outperforms state-of-the-art models using the same input data on both NTU-RGB+D 60 and NTU-RGB+D 120. |
|---|---|
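The summary above describes the two attention factorizations at a high level. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: self-attention is applied over the joints of each frame (as in the SSA module) and over the frames of each joint (as in the TSA module), and the two streams are fused by averaging their class scores. All module names, feature sizes, and the fusion rule are assumptions made for illustration.

```python
# Illustrative sketch only (not the authors' code) of the two attention
# factorizations named in the summary: Spatial Self-Attention (SSA) over the
# joints of each frame, and Temporal Self-Attention (TSA) over the frames of
# each joint. Module names, channel sizes, and the fusion rule are assumptions.
import torch
import torch.nn as nn


class AxisSelfAttention(nn.Module):
    """Multi-head self-attention over one axis of a skeleton clip (joints or frames)."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, channels); tokens are joints for SSA, frames for TSA
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)  # residual connection


class TwoStreamSketch(nn.Module):
    """Spatial stream (attention over joints) + temporal stream (attention over
    frames), fused by averaging their class scores."""

    def __init__(self, channels: int = 64, num_classes: int = 60):
        super().__init__()
        self.embed = nn.Linear(3, channels)       # 3D joint coordinates -> features
        self.ssa = AxisSelfAttention(channels)    # intra-frame (spatial) attention
        self.tsa = AxisSelfAttention(channels)    # inter-frame (temporal) attention
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch B, frames T, joints V, 3)
        b, t, v, _ = x.shape
        feat = self.embed(x)                                        # (B, T, V, C)

        # SSA: attend across the V joints of every frame
        s = self.ssa(feat.reshape(b * t, v, -1)).reshape(b, t, v, -1)

        # TSA: attend across the T frames of every joint
        f = feat.permute(0, 2, 1, 3).reshape(b * v, t, -1)
        f = self.tsa(f).reshape(b, v, t, -1).permute(0, 2, 1, 3)

        # Two-stream fusion: average the pooled class scores of both streams
        return (self.head(s.mean(dim=(1, 2))) + self.head(f.mean(dim=(1, 2)))) / 2


if __name__ == "__main__":
    clip = torch.randn(2, 30, 25, 3)      # 2 clips, 30 frames, 25 joints (NTU layout), xyz
    print(TwoStreamSketch()(clip).shape)  # torch.Size([2, 60])
```

The per-axis reshapes carry the main idea: flattening the batch with the time axis (for SSA) or with the joint axis (for TSA) lets one standard multi-head attention layer serve both streams.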