Capturing Spatial Information for sEMG-Based Gesture Recognition Using Graph Attention Networks
| Published in | 2024 IEEE Conference on Pervasive and Intelligent Computing (PICom), pp. 137-141 |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Publisher | IEEE |
| Published | 05.11.2024 |
| DOI | 10.1109/PICom64201.2024.00026 |
Summary: Surface electromyography (sEMG) signals encode rich information about motion intention and have emerged as a promising avenue for human-computer interaction. Despite advances in machine-learning-based gesture recognition, the omission of spatial information in sEMG signals remains a limitation, often reducing the accuracy and robustness of recognition systems. In this paper, we apply graph attention networks (GATs) to exploit the spatial structure of sEMG signals. First, the sEMG signals are transformed into a graph during preprocessing: each electrode is represented as a node, and the spatial relationships among electrodes are encoded as edges. The GATs then perform gesture recognition on this graph, using the attention mechanism to adaptively reweight the connections between channels. This adaptive approach captures the spatial and temporal correlations embedded in the sEMG signals. On the CapgMyo DB-a dataset, our model achieves a consistent recognition accuracy of approximately 88.8%, surpassing previous machine learning models such as support vector machines (SVMs) and random forests (RFs).
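The paper does not include an implementation, but the pipeline the abstract describes (electrodes as graph nodes, attention-weighted aggregation across channels, a classifier over the pooled node features) is straightforward to sketch. Below is a minimal, self-contained PyTorch illustration under stated assumptions: a 4-connected grid adjacency over the 128 electrodes of the CapgMyo array (8 × 16), raw fixed-length sample windows as node features, a single-head GAT layer in the style of Veličković et al. (2018), and mean pooling before the gesture head. The window length, hidden size, layer count, and graph wiring are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def grid_adjacency(rows: int, cols: int) -> torch.Tensor:
    """4-connected grid adjacency with self-loops, one node per electrode.

    The paper does not spell out its graph construction; wiring each
    electrode to its physical neighbours on the array is one plausible choice.
    """
    n = rows * cols
    adj = torch.eye(n)
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    adj[i, rr * cols + cc] = 1.0
    return adj


class GATLayer(nn.Module):
    """Single-head graph attention layer (Velickovic et al., 2018)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (B, N, in_dim) node features; adj: (N, N) adjacency with self-loops.
        h = self.W(x)                                    # (B, N, out_dim)
        B, N, D = h.shape
        hi = h.unsqueeze(2).expand(B, N, N, D)           # h_i broadcast over j
        hj = h.unsqueeze(1).expand(B, N, N, D)           # h_j broadcast over i
        # e_ij = LeakyReLU(a^T [h_i || h_j]), masked to the neighbourhood.
        e = F.leaky_relu(self.a(torch.cat([hi, hj], -1)).squeeze(-1), 0.2)
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)                 # learned channel weights
        return F.elu(alpha @ h)                          # (B, N, out_dim)


class EMGGATClassifier(nn.Module):
    """Electrode graph -> two GAT layers -> mean pool -> gesture logits."""

    def __init__(self, window: int = 150, hidden: int = 64, n_classes: int = 8):
        super().__init__()
        self.gat1 = GATLayer(window, hidden)
        self.gat2 = GATLayer(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.gat2(self.gat1(x, adj), adj)
        return self.head(h.mean(dim=1))                  # pool over electrodes


# Toy forward pass: batch of 4 windows, 128 electrodes (8 x 16 grid),
# 150 samples per electrode as the node feature vector. The 8-class head
# assumes the 8 gestures of CapgMyo DB-a.
adj = grid_adjacency(8, 16)
model = EMGGATClassifier(window=150, hidden=64, n_classes=8)
logits = model(torch.randn(4, 128, 150), adj)
print(logits.shape)  # torch.Size([4, 8])
```

The dense N × N attention here is acceptable at 128 electrodes; for larger graphs or multi-head attention, a sparse edge-list formulation (e.g., PyTorch Geometric's GATConv) would be the more scalable choice.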