An Extended Spatial Transformer Convolutional Neural Network for Gesture Recognition and Self-Calibration Based on Sparse sEMG Electrodes

Bibliographic Details
Published in: IEEE Transactions on Biomedical Circuits and Systems, Vol. 16, no. 6, pp. 1204-1215
Main Authors: Chen, Wei; Feng, Lihui; Lu, Jihua; Wu, Bian
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2022
ISSN: 1932-4545, 1940-9990
DOI: 10.1109/TBCAS.2022.3222196

Summary: sEMG-based gesture recognition is widely applied in human-machine interaction systems owing to its unique advantages. However, recognition accuracy drops significantly when the electrodes shift. In addition, in applications such as VR, virtual hands should be displayed in a reasonable posture through self-calibration. We propose an armband that fuses sEMG and IMU with autonomously adjustable gain, together with an extended spatial transformer convolutional neural network (EST-CNN) with feature-enhanced pretreatment (FEP), to accomplish both gesture recognition and self-calibration in a single one-shot pass. Unlike manually designed calibration methods, the spatial transformer layers (STL) in EST-CNN automatically learn the transformation relation and explicitly express the rotational angle for coarse correction. Because the shape of the feature pattern changes under rotational shift, we design a fine-tuning layer (FTL) that can regulate the rotational angle within 45°. By combining the STL, FTL, and IMU-based posture, EST-CNN computes a non-discretized angle and achieves high-resolution posture estimation from sparse sEMG electrodes. To evaluate EST-CNN, experiments collected three frequently used gestures from four subjects at equidistant angles. Under electrode shift, the gesture recognition accuracy is 97.06%, which is 5.81% higher than that of a plain CNN, and the fitness between the estimated and true rotational angles is 99.44%.
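To make the spatial-transformer idea concrete, below is a minimal PyTorch sketch of a module that jointly predicts a gesture class and a rotational angle, and uses that angle to spatially correct the electrode feature map before classification. The module name (SEMGSpatialTransformer), layer sizes, and the assumed input shape (batch, 1 channel, 8 electrodes, 64 time samples) are illustrative assumptions, not the EST-CNN architecture or the FEP/FTL design described in the paper.

# Illustrative sketch only: a spatial-transformer-style block for an sEMG "image".
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEMGSpatialTransformer(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        # Localization network: regresses a single rotation angle (radians)
        self.loc_net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, 1),
        )
        # Classifier head applied to the spatially corrected feature map
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        # x: (batch, 1, electrodes, time)
        angle = self.loc_net(x).squeeze(-1)                 # (batch,)
        cos, sin = torch.cos(angle), torch.sin(angle)
        zeros = torch.zeros_like(angle)
        # 2x3 affine matrix encoding a pure rotation by the predicted angle
        theta = torch.stack(
            [torch.stack([cos, -sin, zeros], dim=-1),
             torch.stack([sin,  cos, zeros], dim=-1)], dim=1)   # (batch, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x_corrected = F.grid_sample(x, grid, align_corners=False)
        logits = self.classifier(self.features(x_corrected))
        return logits, angle                                 # gesture logits + coarse angle

# Example usage with random data
model = SEMGSpatialTransformer(n_classes=3)
emg = torch.randn(4, 1, 8, 64)                               # 4 windows, 8 electrodes
logits, angle = model(emg)
print(logits.shape, angle.shape)                             # torch.Size([4, 3]) torch.Size([4])

Exposing the regressed angle as an explicit output, as in this sketch, is what allows the shift correction to double as a posture estimate when fused with IMU data, which is the core one-shot idea summarized above.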