Deep Learning for EEG motor imagery classification based on multi-layer CNNs feature fusion

Bibliographic Details
Published in: Future Generation Computer Systems, Vol. 101, pp. 542-554
Main Authors: Amin, Syed Umar; Alsulaiman, Mansour; Muhammad, Ghulam; Mekhtiche, Mohamed Amine; Shamim Hossain, M.
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.12.2019
ISSN: 0167-739X, 1872-7115
DOI: 10.1016/j.future.2019.06.027

More Information
Summary: Electroencephalography (EEG) motor imagery (MI) signals have recently gained considerable attention because they encode a person's intent to perform an action. Researchers have used MI signals to help disabled persons, to control devices such as wheelchairs, and even for autonomous driving. Accurate decoding of these signals is therefore important for a Brain–Computer Interface (BCI) system. EEG decoding is a challenging task, however, because of the signals' complexity, dynamic nature, and low signal-to-noise ratio. Convolutional neural networks (CNNs) have been shown to extract spatial and temporal features from EEG, but improved CNN models are needed to learn the dynamic correlations present in MI signals. Both shallow and deep CNN models can extract good features, which indicates that relevant features can be extracted at different levels. Fusion of multiple CNN models has not previously been explored for EEG data. In this work, we propose a multi-layer CNN method that fuses CNNs with different characteristics and architectures to improve EEG MI classification accuracy. Our method uses different convolutional features to capture spatial and temporal information from raw EEG data. We demonstrate that the proposed MCNN and CCNN fusion methods outperform state-of-the-art machine learning and deep learning techniques for EEG classification. We performed various experiments to evaluate the performance of the proposed CNN fusion method on public datasets. The proposed MCNN method achieves 75.7% and 95.4% accuracy on the BCI Competition IV-2a dataset and the High Gamma Dataset, respectively. The proposed CCNN method, based on autoencoder cross-encoding, achieves more than 10% improvement for cross-subject EEG classification.
•Multiple CNN models with different layers and filters for robust EEG feature extraction.
•A fusion model for merging multiple CNNs for EEG classification.
•Use of transfer learning and pretraining to further improve EEG decoding accuracy.
•Autoencoder-based cross-subject feature reconstruction to achieve better results.
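To make the multi-branch fusion idea in the summary concrete, the sketch below shows a minimal, hypothetical PyTorch model that extracts features with several CNN branches of different depth and concatenates them before classification. This is not the authors' published architecture: the branch depths, filter counts, kernel sizes, and the assumed input shape (22 EEG channels by 1000 time samples, 4 classes, roughly matching a BCI Competition IV-2a trial) are illustrative assumptions only.

```python
# Hypothetical sketch of multi-branch CNN feature fusion for raw EEG.
# Assumed input: (batch, 1, 22 channels, 1000 samples), 4 MI classes.
import torch
import torch.nn as nn


class CNNBranch(nn.Module):
    """One branch: temporal conv -> spatial conv -> optional deeper temporal blocks."""

    def __init__(self, n_channels: int, n_blocks: int, n_filters: int = 32):
        super().__init__()
        layers = [
            # Temporal convolution along the time axis.
            nn.Conv2d(1, n_filters, kernel_size=(1, 25), padding=(0, 12)),
            # Spatial convolution across all EEG channels.
            nn.Conv2d(n_filters, n_filters, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(n_filters),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
        ]
        # Deeper branches stack additional temporal blocks.
        for _ in range(n_blocks):
            layers += [
                nn.Conv2d(n_filters, n_filters, kernel_size=(1, 11), padding=(0, 5)),
                nn.BatchNorm2d(n_filters),
                nn.ELU(),
                nn.AvgPool2d(kernel_size=(1, 3)),
            ]
        layers.append(nn.AdaptiveAvgPool2d((1, 1)))  # fixed-size feature per branch
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, 1, channels, time) -> (batch, n_filters)
        return self.net(x).flatten(start_dim=1)


class FusionCNN(nn.Module):
    """Fuses features from branches of different depth, then classifies."""

    def __init__(self, n_channels: int = 22, n_classes: int = 4,
                 branch_depths=(0, 1, 2, 3), n_filters: int = 32):
        super().__init__()
        self.branches = nn.ModuleList(
            [CNNBranch(n_channels, depth, n_filters) for depth in branch_depths]
        )
        self.classifier = nn.Linear(n_filters * len(branch_depths), n_classes)

    def forward(self, x):
        # Concatenate per-branch features (the fusion step), then classify.
        fused = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    eeg = torch.randn(8, 1, 22, 1000)   # batch of 8 raw EEG trials
    logits = FusionCNN()(eeg)
    print(logits.shape)                 # torch.Size([8, 4])
```

Each branch here plays the role of a shallow or deep CNN; concatenating their pooled feature vectors is one simple way to realize the feature-level fusion the abstract refers to, with cross-subject pretraining or autoencoder cross-encoding treated as separate, additional steps.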