A Novel Deep Learning Scheme for Motor Imagery EEG Decoding Based on Spatial Representation Fusion

Bibliographic Details
Published in: IEEE Access, Vol. 8, pp. 202100-202110
Main Authors: Yang, Jun; Ma, Zhengmin; Wang, Jin; Fu, Yunfa
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2020
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3035347

More Information
Summary: Motor imagery electroencephalography (MI-EEG), an important subfield of active brain-computer interface (BCI) systems, can help disabled people consciously and directly control prostheses or other external devices, aiding them in certain daily activities. However, the low signal-to-noise ratio and spatial resolution of EEG make MI-EEG decoding a challenging task. Recently, some deep neural approaches have shown clear improvements over state-of-the-art BCI methods. In this study, an end-to-end scheme built around a multi-layer convolutional neural network is constructed to learn an accurate spatial representation of multi-channel, grouped MI-EEG signals, extracting the useful information present in the multi-channel MI signal. Invariant spatial representations are then captured through cross-subject training with a stacked sparse autoencoder framework, inspired by representative deep learning models, to enhance generalization capability. Furthermore, a quantitative experimental analysis is conducted on a private dataset and on a public BCI competition dataset. The results show the effectiveness and significance of the proposed methodology.
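
The abstract names two components: a multi-layer convolutional network that learns a spatial representation from the grouped electrode channels, and a stacked sparse autoencoder trained across subjects. The paper's exact layer configuration is not given in this record, so the following is a minimal PyTorch sketch of that kind of pipeline; the channel count, kernel sizes, feature dimensions, and the L1 sparsity surrogate are all illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn

class SpatialCNN(nn.Module):
    # Multi-layer CNN over (channels x time) MI-EEG epochs:
    # a spatial convolution mixes all electrodes, then a temporal
    # convolution filters along the time axis.
    def __init__(self, n_channels=22, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),  # spatial filter
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(1, 25)),         # temporal filter
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 1)),
            nn.Flatten(),
            nn.Linear(32, n_features),
        )

    def forward(self, x):  # x: (batch, 1, n_channels, n_times)
        return self.net(x)

class SparseAE(nn.Module):
    # One sparse autoencoder layer; several of these, trained greedily
    # and then stacked, form a stacked sparse autoencoder (SSAE).
    def __init__(self, in_dim, hid_dim, sparsity_weight=1e-3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hid_dim, in_dim)
        self.sparsity_weight = sparsity_weight

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

    def loss(self, x):
        recon, h = self(x)
        # Reconstruction error plus an L1 penalty on hidden activations,
        # a simple surrogate for the usual KL-divergence sparsity term.
        return nn.functional.mse_loss(recon, x) + self.sparsity_weight * h.abs().mean()

x = torch.randn(8, 1, 22, 500)      # 8 epochs: 22 channels x 500 samples
features = SpatialCNN()(x)          # -> (8, 64) spatial feature vectors
ae = SparseAE(64, 32)
pretrain_loss = ae.loss(features)   # layer-wise SSAE pretraining objective

In the cross-subject setting the abstract describes, each SparseAE layer could be pretrained on CNN features pooled from multiple subjects and the whole stack then fine-tuned with a classifier head, which is the standard greedy recipe for stacked sparse autoencoders.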